{{Virtuozzo}}
<translate>
<!--T:1-->
This article describes how to install OpenStack on [[Virtuozzo]] 7.
== Introduction ==
You need to install and update your Virtuozzo nodes first.

Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage.

Update the Virtuozzo hosts:
 $ yum update -y

If you have the br0 bridge configured as an IP interface, you should move the IP address assigned to it to the physical Ethernet interface bridged to br0. You can check your configuration with the following command:
 $ if=$(brctl show | grep '^br0' | awk ' { print $4 }') && addr=$(ip addr | grep -w 'br0' | grep inet | awk ' {print $2} ') && gw=$(ip route | grep default | awk ' { print $3 } ') && echo "My interface is '$if', gateway is '$gw', IP address '$addr'"

Suppose the script prints the following:
 My interface is 'ens33', gateway is '192.168.190.2', IP address '192.168.190.134/24'

Then edit /etc/sysconfig/network-scripts/ifcfg-ens33 to contain the following, and remove the BRIDGE="br0" line from it:
 ...
 ONBOOT=yes
 IPADDR=192.168.190.134
 GATEWAY=192.168.190.2
 PREFIX=24
 ...

Remove the /etc/sysconfig/network-scripts/ifcfg-br0 file:
 $ rm /etc/sysconfig/network-scripts/ifcfg-br0
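The manual edit above can also be sketched as a small helper script. This is only an illustration, not part of the Virtuozzo tooling: the <code>generate_ifcfg</code> function and its arguments are hypothetical, and on a real host you would redirect its output to the ifcfg file for your interface.

```shell
#!/bin/sh
# Illustrative helper: print ifcfg-style contents for the physical
# interface after the br0 bridge is removed. The values come from the
# brctl/ip inspection command shown above.
generate_ifcfg() {
    # $1=device  $2=IP address  $3=prefix  $4=gateway
    cat <<EOF
DEVICE=$1
ONBOOT=yes
BOOTPROTO=none
IPADDR=$2
PREFIX=$3
GATEWAY=$4
EOF
}

# Example with the values from this article:
generate_ifcfg ens33 192.168.190.134 24 192.168.190.2
```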
Then restart the network service:
 $ systemctl restart network

== Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*) ==

<!--T:3-->
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.

Git must be installed on all your Virtuozzo nodes:
 $ yum install git -y

Clone the Virtuozzo scripts:
 $ cd /vz
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts
 $ cd virtuozzo-openstack-scripts

If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node and not on the compute node, you need to set up the Virtuozzo Storage client and authorize the node in the Virtuozzo Storage cluster.

Set up the Virtuozzo Storage client:
 $ yum install vstorage-client -y

Check that cluster discovery works first:
 $ vstorage discover
The output will show the discovered clusters.

Now authenticate the controller node on the Virtuozzo Storage cluster:
 $ vstorage -c $CLUSTER_NAME auth-node
Enter the Virtuozzo Storage cluster password and press Enter.

Check the cluster properties:
 $ vstorage -c $CLUSTER_NAME top
The output will show the Virtuozzo Storage cluster properties and state.

Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md

Example:
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true --fixed_range 192.168.0.0/24 --floating_range 10.24.41.0/24 --floating_pool "start=10.24.41.151,end=10.24.41.199" --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL

Run the script on your CONTROLLER node and follow the instructions (if any):
 $ ./setup_devstack_vz7.sh

Installation can take up to 30 minutes depending on your Internet connection speed. Finished!

== Setup OpenStack Compute Node (*Developer/POC Setup*) ==

Git must be installed on all your Virtuozzo nodes:
 $ yum install git -y

Clone the Virtuozzo scripts to your COMPUTE node:
 $ cd /vz
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts
 $ cd /vz/virtuozzo-openstack-scripts

If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node and not on the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.

Set up the Virtuozzo Storage client:
 $ yum install vstorage-client -y

Check that cluster discovery works first:
 $ vstorage discover
The output will show the discovered clusters.

Now authenticate the compute node on the Virtuozzo Storage cluster:
 $ vstorage -c $CLUSTER_NAME auth-node
Enter the Virtuozzo Storage cluster password and press Enter.

Check the cluster properties:
 $ vstorage -c $CLUSTER_NAME top
The output will show the Virtuozzo Storage cluster properties and state.

Configure the script on the COMPUTE node. Please read the script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md

Example:
 $ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1! --use_provider_network true --mode COMPUTE --controller 10.24.41.25

Run the script on your COMPUTE node and follow the instructions (if any):
 $ ./setup_devstack_vz7.sh

== How to change Virtualization Type to Virtual Machines on the Compute Node ==

If you want to use virtual machines instead of containers on your compute node, you need to change the virtualization type to KVM on the selected compute node.
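With the libvirt compute driver, the virtualization type is controlled by the <code>virt_type</code> option in the <code>[libvirt]</code> section of nova.conf. The sketch below only illustrates that change on a sample file; the sample contents stand in for /etc/nova/nova.conf on your compute node, and the exact option values there may differ.

```shell
#!/bin/sh
# Sketch only: demonstrate switching nova's virt_type from containers
# (parallels) to virtual machines (kvm). A temporary sample file stands
# in for /etc/nova/nova.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = parallels
EOF

# Change the virtualization type to KVM:
sed -i 's/^virt_type = .*/virt_type = kvm/' "$conf"

grep '^virt_type' "$conf"
# On a real node, restart the compute service afterwards, e.g.:
#   systemctl restart openstack-nova-compute
```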
* Then restart the cinder services:
 $ systemctl restart openstack-cinder-api
 $ systemctl restart openstack-cinder-scheduler
 $ systemctl restart openstack-cinder-volume

== How to create a new ploop image ready to upload to Glance ==

<!--T:17-->
* Select an OS template. The following templates are possible: vzlinux-7, centos-7, ubuntu-16.04, ubuntu-14.04, debian-8.0, centos-6, debian-8.0-x86_64-minimal
 $ ct=centos-7
* Create a new container based on the chosen OS distribution:
 $ prlctl create glance-$ct --vmtype ct --ostemplate $ct
* Set the IP address and DNS server to be able to connect to the Internet from the container:
 $ prlctl set glance-$ct --ipadd IPADDR --nameserver DNS_IPADDR
* Add an additional network adapter:
 $ prlctl set glance-$ct --device-add net --network Bridged --dhcp on
* Start the container:
 $ prlctl start glance-$ct
* Install the cloud-init package:
 $ prlctl exec glance-$ct yum install cloud-init -y
* Stop the container and mount it:
 $ prlctl stop glance-$ct
 $ prlctl mount glance-$ct
* Store the container uuid:
 $ uuid=$(vzlist glance-$ct | awk ' NR>1 { print $1 }')
* Remove the following modules from cloud.cfg:
 $ sed -i '/- growpart/d' /vz/root/$uuid/etc/cloud/cloud.cfg
 $ sed -i '/- resizefs/d' /vz/root/$uuid/etc/cloud/cloud.cfg
* Copy the eth0 network configuration to eth1 and adjust the device name in the copy:
 $ cp /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth0 /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth1
 $ sed -i 's/eth0/eth1/' /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth1
* Remove the venet0 network configuration and resolv.conf:
 $ rm -f /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-venet0*
 $ rm -f /vz/root/$uuid/etc/resolv.conf
* Create a ploop disk and copy the container contents into it:
 $ mkdir /tmp/ploop-$ct
 $ ploop init -s 950M /tmp/ploop-$ct/$ct.hds
 $ mkdir /tmp/ploop-$ct/dst
 $ ploop mount -m /tmp/ploop-$ct/dst /tmp/ploop-$ct/DiskDescriptor.xml
 $ cp -Pr --preserve=all /vz/root/$uuid/* /tmp/ploop-$ct/dst/
 $ ploop umount -m /tmp/ploop-$ct/dst/
* Unmount the container:
 $ prlctl umount glance-$ct
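The ploop steps above can be collected into one script. This is a sketch, not official Virtuozzo tooling: it assumes the prlctl and ploop commands from Virtuozzo 7, uses a placeholder container uuid, and by default only prints the command sequence (DRY_RUN=1) so it can be reviewed before running on a real node with DRY_RUN=0.

```shell
#!/bin/sh
# Sketch: replay the ploop-image steps for a prepared glance-$ct
# container. DRY_RUN=1 (the default) prints each command instead of
# executing it, since prlctl/ploop exist only on a Virtuozzo host.
set -e
ct=${1:-centos-7}
DRY_RUN=${DRY_RUN:-1}

# Placeholder; on a real node obtain it with:
#   uuid=$(vzlist glance-$ct | awk ' NR>1 { print $1 }')
uuid=CONTAINER_UUID

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

run prlctl stop "glance-$ct"
run prlctl mount "glance-$ct"
run mkdir -p "/tmp/ploop-$ct/dst"
run ploop init -s 950M "/tmp/ploop-$ct/$ct.hds"
run ploop mount -m "/tmp/ploop-$ct/dst" "/tmp/ploop-$ct/DiskDescriptor.xml"
run cp -Pr --preserve=all "/vz/root/$uuid/." "/tmp/ploop-$ct/dst/"
run ploop umount -m "/tmp/ploop-$ct/dst/"
run prlctl umount "glance-$ct"
```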
</translate>
[[Category: HOWTO]]