Setup OpenStack with Virtuozzo 7
Revision as of 14:39, 1 March 2017
<translate> This article describes how to install OpenStack on Virtuozzo 7.
Contents
- 1 Introduction
- 2 Prerequisites
- 3 Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*)
- 4 Setup OpenStack Compute Node (*Developer/POC Setup*)
- 5 How to change Virtualization Type to Virtual Machines on the Compute Node
- 6 How to redeploy OpenStack on the same nodes
- 7 Installing OpenStack with packstack on Virtuozzo 7 (*Production Setup*)
- 8 Install and configure a nova controller node on Virtuozzo 7 (*Production Setup*)
- 9 Install and configure a compute node on Virtuozzo 7 (*Production Setup*)
- 10 Install and configure a block storage node on Virtuozzo 7 (*Production Setup*)
- 11 How to create a new ploop image ready to upload to Glance
- 12 See also
Introduction
Virtuozzo has supported OpenStack as a cloud management solution since version 6. Virtuozzo 7 adds many new capabilities to the OpenStack integration.
This guide describes two ways of installing OpenStack on Virtuozzo nodes: the first for quick development/POC needs, the second for production. Please keep in mind that devstack installs OpenStack for demo/POC/development purposes only, which means the setup is reset after a host reboot.
You need the following infrastructure to setup OpenStack with Virtuozzo 7:
- controller host: physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and as a Virtuozzo containers host.
- compute host: physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as a virtual machines host.
Prerequisites
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual; you can use basic (local) storage or Virtuozzo Storage. Then update the Virtuozzo hosts:
$ yum update -y
If you have a br0 bridge configured as an IP interface, you should move the IP address assigned to it to the physical Ethernet interface bridged to br0. You can check your configuration with the following command:
$ if=$(brctl show | grep '^br0' | awk ' { print $4 }') && addr=$(ip addr | grep -w 'br0' | grep inet | awk ' {print $2} ') && gw=$(ip route | grep default | awk ' { print $3 } ') && echo "My interface is '$if', gateway is '$gw', IP address '$addr'"
For instance, executing the above script may produce the following output:
My interface is 'ens33', gateway is '192.168.190.2', IP address '192.168.190.134/24'.
Then edit /etc/sysconfig/network-scripts/ifcfg-ens33 to have the following content, removing the BRIDGE="br0" line from it:
...
ONBOOT=yes
IPADDR=192.168.190.134
GATEWAY=192.168.190.2
PREFIX=24
...
Remove the /etc/sysconfig/network-scripts/ifcfg-br0 file:
$ rm /etc/sysconfig/network-scripts/ifcfg-br0
Then restart network service:
$ systemctl restart network
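The reconfiguration above can be sketched as a script. This is a minimal sketch using the example values from the output shown earlier (interface ens33, address 192.168.190.134/24, gateway 192.168.190.2 are assumptions); it writes the draft ifcfg file to a scratch directory so nothing on the node is touched until you review and copy it into place:

```shell
# Sketch: generate a bridge-free ifcfg file from the values reported by
# the check script. All values below are examples; substitute your own.
# The draft lands in a scratch directory, not in /etc/sysconfig.
workdir=$(mktemp -d)
iface=ens33
ipaddr=192.168.190.134
prefix=24
gateway=192.168.190.2

cat > "$workdir/ifcfg-$iface" << _EOF
DEVICE=$iface
ONBOOT=yes
BOOTPROTO=none
IPADDR=$ipaddr
PREFIX=$prefix
GATEWAY=$gateway
_EOF

# Note there is deliberately no BRIDGE="br0" line in the draft.
cat "$workdir/ifcfg-$iface"
```

After reviewing the draft, copy it over /etc/sysconfig/network-scripts/ifcfg-ens33, remove ifcfg-br0, and restart the network service as shown above.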
Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*)
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case a compute node with Virtuozzo Containers support is deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.
Git must be installed on all your Virtuozzo nodes:
$ yum install git -y
Clone virtuozzo scripts:
$ cd /vz
$ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts
$ cd virtuozzo-openstack-scripts
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage cluster runs on another node rather than on this one, you need to set up the Virtuozzo Storage client and authorize the controller node in the Virtuozzo Storage cluster.
Set up the Virtuozzo Storage client:
$ yum install vstorage-client -y
Check that cluster discovery is working first:
$ vstorage discover
The output will show the discovered clusters. Now authenticate the controller node on the Virtuozzo Storage cluster:
$ vstorage -c $CLUSTER_NAME auth-node
Enter the Virtuozzo Storage cluster password and press Enter. Check the cluster properties:
$ vstorage -c $CLUSTER_NAME top
The output will show the Virtuozzo Storage cluster properties and state.
Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md
Example:
$ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true --fixed_range 192.168.0.0/24 --floating_range 10.24.41.0/24 --floating_pool "start=10.24.41.151,end=10.24.41.199" --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL
Run the script on your CONTROLLER node and follow instructions (if any):
$ ./setup_devstack_vz7.sh
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!
Setup OpenStack Compute Node (*Developer/POC Setup*)
Git must be installed on all your Virtuozzo nodes:
$ yum install git -y
Clone Virtuozzo scripts to your COMPUTE node:
$ cd /vz
$ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts
$ cd /vz/virtuozzo-openstack-scripts
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage cluster runs on another node rather than on the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster. Set up the Virtuozzo Storage client:
$ yum install vstorage-client -y
Check cluster discovery is working fine first:
$ vstorage discover
The output will show the discovered clusters. Now authenticate the compute node on the Virtuozzo Storage cluster:
$ vstorage -c $CLUSTER_NAME auth-node
Enter the Virtuozzo Storage cluster password and press Enter. Check the cluster properties:
$ vstorage -c $CLUSTER_NAME top
The output will show the Virtuozzo Storage cluster properties and state.
Configure the script on the COMPUTE node. Please read the script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md
Example:
$ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1! --use_provider_network true --mode COMPUTE --controller 10.24.41.25
Run the script on your COMPUTE node and follow instructions (if any):
$ ./setup_devstack_vz7.sh
How to change Virtualization Type to Virtual Machines on the Compute Node
If you want to run virtual machines instead of containers on your compute node, change the virtualization settings on the selected compute node as follows.
Open nova configuration file:
$ vi /etc/nova/nova.conf
Change the following lines:
[libvirt]
...
virt_type = parallels
images_type = qcow2
connection_uri = vz:///system
Delete the line:
inject_partition = -2
Save the file.
Restart nova-compute service:
$ su stack
$ screen -r
Press Ctrl-c
$ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' & echo $! >/vz/stack/status/stack/n-cpu.pid; fg || echo "n-cpu failed to start" | tee "/vz/stack/status/stack/n-cpu.failure"
To exit from the screen session, press Ctrl+a, then d.
How to redeploy OpenStack on the same nodes
Your OpenStack setup is reset after a node restart. To redeploy OpenStack on the same nodes, do the following:
cd /vz/virtuozzo-openstack-scripts
git pull
- Run ./setup_devstack_vz7.sh with the options you need.
Installing OpenStack with packstack on Virtuozzo 7 (*Production Setup*)
- Create a new repo file:
cat > /etc/yum.repos.d/virtuozzo-extra.repo << _EOF
[virtuozzo-extra]
name=Virtuozzo Extra
baseurl=http://repo.virtuozzo.com/openstack/newton/x86_64/os/
enabled=1
gpgcheck=1
priority=50
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-7
_EOF
- Add RDO repository:
$ yum install https://rdoproject.org/repos/rdo-release.rpm
- Install packstack package:
$ yum install openstack-packstack
- Download sample Vz7 packstack answer file:
$ wget https://raw.githubusercontent.com/virtuozzo/virtuozzo-openstack-scripts/master/vz7-packstack-sample.txt
- Edit vz7-packstack-sample.txt, enabling/disabling the necessary services. Also make sure the correct IP addresses are specified in the following parameters in the file:
CONFIG_CONTROLLER_HOST
CONFIG_COMPUTE_HOSTS
CONFIG_NETWORK_HOSTS
CONFIG_AMQP_HOST
CONFIG_MARIADB_HOST
CONFIG_REDIS_HOST
- Be sure to change the CONFIG_DEFAULT_PASSWORD parameter!
- Then run packstack:
$ packstack --answer-file vz7-packstack-sample.txt
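Before running packstack, it can be worth sanity-checking the answer file. The following is a minimal sketch, not part of packstack itself: the sample file contents and values are illustrative, and the checks only confirm the host parameters look like IP addresses and the default password is not left empty.

```shell
# Sketch: sanity-check a packstack answer file before deployment.
# The sample file below is illustrative; point $answers at your real
# vz7-packstack-sample.txt instead.
answers=$(mktemp)
cat > "$answers" << _EOF
CONFIG_CONTROLLER_HOST=10.24.41.25
CONFIG_COMPUTE_HOSTS=10.24.41.26
CONFIG_DEFAULT_PASSWORD=Virtuozzo1!
_EOF

ok=1
# Host parameters must be set to something that starts like an IP address.
for key in CONFIG_CONTROLLER_HOST CONFIG_COMPUTE_HOSTS; do
    if ! grep -q "^$key=[0-9]" "$answers"; then
        echo "missing or non-IP value for $key"
        ok=0
    fi
done
# The default password must not be left empty.
if grep -q '^CONFIG_DEFAULT_PASSWORD=$' "$answers"; then
    echo "CONFIG_DEFAULT_PASSWORD is empty"
    ok=0
fi
if [ "$ok" -eq 1 ]; then
    echo "answer file looks sane"
fi
```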
Install and configure a nova controller node on Virtuozzo 7 (*Production Setup*)
- Follow instructions on OpenStack.org
- Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:
disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop
- Restart glance-api service:
systemctl restart openstack-glance-api.service
- Download the container image
- Unpack it
$ tar -xzvf centos7-exe.hds.tar.gz
- Upload the image to glance:
NOTE: this image was created for testing purposes only. Don't use it in production as is!
$ glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --property hypervisor_type=vz --property cinder_img_volume_type=vstorage-ploop --file centos7-exe.hds
$ glance image-create --name centos7-hvm --disk-format qcow2 --container-format bare --property cinder_img_volume_type=vstorage-qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2
- A CentOS cloud image can be obtained from the official CentOS image repository.
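The disk_formats change above can be scripted so it is idempotent. This is a sketch that works on a scratch copy; on a real controller you would point conf at /etc/glance/glance-api.conf and restart openstack-glance-api afterwards:

```shell
# Sketch: append 'ploop' to the glance disk_formats list only if it is
# not already present. Operates on a scratch file, not the real config.
conf=$(mktemp)
echo 'disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso' > "$conf"

# The grep guard makes the edit safe to run more than once.
if ! grep -q '^disk_formats.*ploop' "$conf"; then
    sed -i 's/^disk_formats = .*/&,ploop/' "$conf"
fi
cat "$conf"
```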
Install and configure a compute node on Virtuozzo 7 (*Production Setup*)
- Follow instructions on OpenStack.org
- In addition to the above instructions, change /etc/nova/nova.conf:
[DEFAULT]
...
vnc_keymap =
force_raw_images = False
pointer_model = ps2mouse
[libvirt]
...
vzstorage_mount_group = root
virt_type = parallels
images_type = ploop
connection_uri = vz:///system
- Remove 'cpu_mode' parameter or set the following:
cpu_mode=none
- Then restart nova-compute service:
$ systemctl restart openstack-nova-compute.service
- If you plan to run virtual machines on your compute node, change the 'images_type' parameter to 'qcow2'.
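Switching images_type between containers (ploop) and virtual machines (qcow2) can be sketched as a one-line sed edit. The sample nova.conf fragment below is a scratch copy; on a real node you would target /etc/nova/nova.conf and restart openstack-nova-compute afterwards:

```shell
# Sketch: flip images_type in the [libvirt] section. The config fragment
# is a scratch copy mirroring the settings shown above.
conf=$(mktemp)
cat > "$conf" << _EOF
[libvirt]
vzstorage_mount_group = root
virt_type = parallels
images_type = ploop
connection_uri = vz:///system
_EOF

target=qcow2   # use 'ploop' to switch back to containers
sed -i "s/^images_type = .*/images_type = $target/" "$conf"
grep '^images_type' "$conf"
```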
Install and configure a block storage node on Virtuozzo 7 (*Production Setup*)
- Follow instructions on OpenStack.org
- In addition to the above instructions, change /etc/cinder/cinder.conf:
[DEFAULT]
...
enabled_backends = lvmdriver-1,vstorage-ploop,vstorage-qcow2
...
[vstorage-ploop]
vzstorage_default_volume_format = ploop
vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
volume_backend_name = vstorage-ploop
[vstorage-qcow2]
vzstorage_default_volume_format = qcow2
vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
volume_backend_name = vstorage-qcow2
- Create /etc/cinder/vzstorage-shares-vstorage.conf with the following content:
YOUR-CLUSTER-NAME ["-u", "cinder", "-g", "root", "-m", "0770"]
- Create two new volume types:
$ cinder type-create vstorage-qcow2
$ cinder type-key vstorage-qcow2 set volume_backend_name=vstorage-qcow2
$ cinder type-create vstorage-ploop
$ cinder type-key vstorage-ploop set volume_backend_name=vstorage-ploop
- Create directory for storage logs:
$ mkdir /var/log/pstorage
- Authenticate your Virtuozzo Storage client nodes in order to allow them to mount the cluster:
$ echo $CLUSTER_PASSWD | vstorage auth-node -c cc -P
- Then restart cinder services:
$ systemctl restart openstack-cinder-api
$ systemctl restart openstack-cinder-scheduler
$ systemctl restart openstack-cinder-volume
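Generating the shares file can be sketched as follows. CLUSTER_NAME is a placeholder for your real cluster name, and the sketch writes to a scratch path rather than /etc/cinder/vzstorage-shares-vstorage.conf; the mount options mirror those shown above:

```shell
# Sketch: write a cinder vzstorage shares file. 'CLUSTER_NAME' is a
# placeholder; the output goes to a scratch file for review.
shares=$(mktemp)
cluster=CLUSTER_NAME
cat > "$shares" << _EOF
$cluster ["-u", "cinder", "-g", "root", "-m", "0770"]
_EOF
cat "$shares"
```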
How to create a new ploop image ready to upload to Glance
- Select an OS template. The following templates are available: vzlinux-7, centos-7, ubuntu-16.04, ubuntu-14.04, debian-8.0, centos-6, debian-8.0-x86_64-minimal
$ ct=centos-7
- Create a new container based on the selected OS distribution:
$ prlctl create glance-$ct --vmtype ct --ostemplate $ct
- Set an IP address and DNS server to enable Internet access from the container:
$ prlctl set glance-$ct --ipadd IPADDR --nameserver DNS_IPADDR
- Add an additional network adapter:
$ prlctl set glance-$ct --device-add net --network Bridged --dhcp on
- Start the container
$ prlctl start glance-$ct
- Install the cloud-init package:
$ prlctl exec glance-$ct yum install cloud-init -y
- Remove the following modules from cloud.cfg
$ prlctl exec glance-$ct sed -i '/- growpart/d' /etc/cloud/cloud.cfg
$ prlctl exec glance-$ct sed -i '/- resizefs/d' /etc/cloud/cloud.cfg
- Prepare network scripts
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 << _EOF
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=dhcp
_EOF
- If you need more than one network adapter within a container, make as many copies as you need:
$ prlctl exec glance-$ct cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1
$ prlctl exec glance-$ct sed -i 's/eth0/eth1/' /etc/sysconfig/network-scripts/ifcfg-eth1
- Perform some cleanup inside the container:
$ prlctl exec glance-$ct rm -f /etc/sysconfig/network-scripts/ifcfg-venet0*
$ prlctl exec glance-$ct rm -f /etc/resolv.conf
- Stop the container
$ prlctl stop glance-$ct
- Create ploop disk and copy files
$ mkdir /tmp/ploop-$ct
$ ploop init -s 950M /tmp/ploop-$ct/$ct.hds
$ mkdir /tmp/ploop-$ct/dst
$ ploop mount -m /tmp/ploop-$ct/dst /tmp/ploop-$ct/DiskDescriptor.xml
$ prlctl mount glance-$ct
$ id=$(vzlist glance-$ct | awk ' NR>1 { print $1 }')
$ cp -Pr --preserve=all /vz/root/$id/* /tmp/ploop-$ct/dst/
$ prlctl umount glance-$ct
$ ploop umount -m /tmp/ploop-$ct/dst/
- Now the image /tmp/ploop-$ct/$ct.hds is ready to be uploaded to Glance.
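The in-container file edits above (removing the resize modules from cloud.cfg and generating per-adapter ifcfg files) can be sketched on scratch copies. This is only a dry run of the file manipulation: the sample cloud.cfg contents and the adapter count are illustrative, and on a real container the same commands run against /etc/cloud/cloud.cfg and /etc/sysconfig/network-scripts/ via prlctl exec, as shown in the steps above:

```shell
# Sketch: rehearse the cloud.cfg and ifcfg edits on scratch files.
# The cloud.cfg fragment here is a made-up sample, not the real file.
cfg=$(mktemp)
cat > "$cfg" << _EOF
cloud_init_modules:
 - migrator
 - growpart
 - resizefs
 - set_hostname
_EOF

# Drop the growpart and resizefs modules, as done via prlctl exec above.
sed -i '/- growpart/d; /- resizefs/d' "$cfg"

# Generate ifcfg files for several adapters in a scratch directory.
scripts=$(mktemp -d)
for n in 0 1 2; do
cat > "$scripts/ifcfg-eth$n" << _EOF
DEVICE=eth$n
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=dhcp
_EOF
done
ls "$scripts"
```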
See also
- Controller Node Installation Guide
- Compute Node Installation Guide
- OpenStack Installation Guide
- Virtuozzo Documentation
- Virtuozzo ecosystem
</translate>