Setup OpenStack with Virtuozzo 7

Virtuozzo has supported OpenStack as a cloud management solution since version 6, and Virtuozzo 7 adds many new capabilities to the OpenStack integration.

Current limitations (bugs, not implemented, or by design):
#HA does not work.
#Virtuozzo Storage is not supported for containers and VMs in Cinder.

This guide describes two ways of installing OpenStack on Virtuozzo nodes. The first is for quick development/POC needs; the second is for production. Please keep in mind that devstack installs OpenStack for demo/POC/development purposes only, which means the setup is reset after a host reboot, so virtual machines are the best platform for a devstack-based setup.

Please note that OpenStack currently does not support containers and virtual machines on the same node, so you need at least two nodes to try both container and VM management.
  
 
You need the following infrastructure to setup OpenStack with Virtuozzo 7:

  $ yum update -y
  
If you have a br0 bridge configured as an IP interface, you should move the IP address assigned to it to the physical Ethernet interface bridged to br0.
IP connection tracking should be enabled for CT0. Please do the following:
#Open the file /etc/modprobe.d/vz.conf
#Change the line <code>options nf_conntrack ip_conntrack_disable_ve0=1</code> to <code>options nf_conntrack ip_conntrack_disable_ve0=0</code>
#Reboot the system

You can check your network configuration with the following command:

 $ if=$(brctl show | grep '^br0' | awk ' { print $4 }') && addr=$(ip addr | grep -w 'br0' | grep inet | awk ' {print $2} ') && gw=$(ip route | grep default | awk ' { print $3 } ') && echo "My interface is '$if', gateway is '$gw', IP address '$addr'"
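The vz.conf edit from the connection-tracking steps above can also be scripted. A minimal sketch, demonstrated on a scratch copy of the file so it is safe to try anywhere; on a real node you would point it at /etc/modprobe.d/vz.conf and reboot afterwards:

```shell
# Step 2 of the conntrack instructions as a one-liner.
# Demonstrated on a scratch copy; on a real node target /etc/modprobe.d/vz.conf
# and reboot the system afterwards.
CONF=$(mktemp)
echo 'options nf_conntrack ip_conntrack_disable_ve0=1' > "$CONF"
sed -i 's/ip_conntrack_disable_ve0=1/ip_conntrack_disable_ve0=0/' "$CONF"
cat "$CONF"
```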

For instance, you may get the following output after executing the above script:
 
 
 
My interface is 'ens33', gateway is '192.168.190.2', IP address '192.168.190.134/24'.
 
 
 
Then edit /etc/sysconfig/network-scripts/ifcfg-ens33 to have the following content, removing the BRIDGE="br0" line from it:
 
...
 
ONBOOT=yes
 
IPADDR=192.168.190.134
 
GATEWAY=192.168.190.2
 
PREFIX=24
 
...
 
 
 
Remove the /etc/sysconfig/network-scripts/ifcfg-br0 file.
 
 
 
  $ rm /etc/sysconfig/network-scripts/ifcfg-br0
 
 
Then restart network service:
 
 
 
  $ systemctl restart network
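To double-check the result, here is a small self-contained sketch of the expected end state of the ifcfg file. It builds a scratch file the way the text above describes; on a real node you would run the two grep checks against /etc/sysconfig/network-scripts/ifcfg-ens33 (or whatever your interface's file is):

```shell
# Verify an ifcfg file carries the static address and no bridge reference.
# Demonstrated on a scratch file; on a real node check
# /etc/sysconfig/network-scripts/ifcfg-ens33 instead.
IFCFG=$(mktemp)
cat > "$IFCFG" << _EOF
ONBOOT=yes
IPADDR=192.168.190.134
GATEWAY=192.168.190.2
PREFIX=24
_EOF
grep -q '^IPADDR=' "$IFCFG" && ! grep -q '^BRIDGE=' "$IFCFG" && echo "ifcfg looks good"
```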
 
  
 
== Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*) == <!--T:1-->
 
<!--T:3-->
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.
 
Git must be installed on all your Virtuozzo nodes:

  $ yum install git -y
 
  
 
Clone the Virtuozzo scripts:
 
Output will show the discovered clusters.

Now you need to authenticate the controller node on the Virtuozzo Storage cluster:
  $ vstorage -c $CLUSTER_NAME auth-node -P
 
Enter the Virtuozzo Storage cluster password and press Enter.

Check the cluster properties:
  
 
== Setup OpenStack Compute Node (*Developer/POC Setup*) ==
 
 
Clone the Virtuozzo scripts to your COMPUTE node:
 
Output will show the discovered clusters.

Now you need to authenticate the compute node on the Virtuozzo Storage cluster:
  $ vstorage -c $CLUSTER_NAME auth-node -P
 
Enter the Virtuozzo Storage cluster password and press Enter.

Check the cluster properties:
 
Output will show the Virtuozzo Storage cluster properties and state.
  
Configure the script on the COMPUTE node. Please read the script description here: https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md
  
 
Example:
 
  [libvirt]
  ...
  virt_type = parallels
  images_type = qcow2
  connection_uri = vz:///system
  
 
Delete the line:
 
# Run ./setup_devstack_vz7.sh with the options you need.
  
== Installing OpenStack with help of packstack on [[Virtuozzo]] 7 (*Production Setup*) ==  
* Install Virtuozzo Platform Release package to all Virtuozzo OpenStack nodes:
 
 
$ yum install vz-platform-release
 
 
 
* Install packstack package:
 
 
 
$ yum install openstack-packstack
 
 
 
* Download sample Vz7 packstack answer file:
 
 
 
$ wget https://raw.githubusercontent.com/virtuozzo/virtuozzo-openstack-scripts/master/vz7-packstack-ocata.txt
 
 
 
* Edit vz7-packstack-ocata.txt enabling/disabling necessary services
 
* Replace all references to 'localhost' and '127.0.0.1' host addresses with correct values
 
* Set all password parameters containing the PW_PLACEHOLDER string to some meaningful values
 
* If you are going to use Virtuozzo Storage as a Cinder Volume backend set the following parameters:
 
 
 
  # Enable Virtuozzo Storage
 
  CONFIG_VSTORAGE_ENABLED=y
 
 
 
  # VStorage cluster name.
 
  CONFIG_VSTORAGE_CLUSTER_NAME=
 
 
 
  # VStorage cluster password.
 
  CONFIG_VSTORAGE_CLUSTER_PASSWORD=
 
 
 
  # Bridge mappings
 
  CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet1:br-ex
 
 
 
  # Bridge interfaces
 
  CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0
 
 
 
  # Bridge mapping for compute node
 
  CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=extnet1:br-ex
 
 
 
* Then run packstack:
 
 
 
$ packstack --answer-file=vz7-packstack-ocata.txt
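The host and password edits described above can be scripted with sed/grep. A sketch, shown on a tiny stand-in answer file so it runs anywhere; against the real vz7-packstack-ocata.txt you would use the same commands with your actual controller address (10.0.0.10 below is a placeholder, and the parameter names are just typical packstack keys):

```shell
# Substitute the loopback address and list passwords still to be filled in.
# Demonstrated on a scratch stand-in for vz7-packstack-ocata.txt.
F=$(mktemp)
printf 'CONFIG_CONTROLLER_HOST=127.0.0.1\nCONFIG_KEYSTONE_ADMIN_PW=PW_PLACEHOLDER\n' > "$F"
sed -i 's/127\.0\.0\.1/10.0.0.10/g' "$F"
grep -n 'PW_PLACEHOLDER' "$F"   # each remaining hit still needs a real password
```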
 
 
 
== Install and configure a nova controller node on [[Virtuozzo]] 7 (*Production Setup*) == <!--T:18-->

<!--T:19-->
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html OpenStack.org]
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop'. Like this:

  disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop

* Restart the glance-api service:

  $ systemctl restart openstack-glance-api.service
 
  
* Download the container [http://updates.virtuozzo.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]
 
* Unpack it
 
NOTE: this image was created for testing purposes only. Don't use it in production as is!
  
  $ glance image-create --name centos7-exe --disk-format ploop --min-ram 512 --min-disk 1 --container-format bare --property vm_mode=exe --property hypervisor_type=vz --property cinder_img_volume_type=vstorage-ploop --file centos7-exe.hds
 
 
$ glance image-create --name centos7-hvm --disk-format qcow2 --min-ram 1024 --min-disk 10 --container-format bare --property cinder_img_volume_type=vstorage-qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2
 
 
 
* The CentOS cloud image can be downloaded from [http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 cloud.centos.org]
 
  
== Install and configure a compute node on [[Virtuozzo]] 7 (*Production Setup*) == <!--T:16-->
  
 
<!--T:17-->
Please use this chapter if you are going to run containers OR virtual machines on your compute node, but not containers AND virtual machines simultaneously. If you need to run containers and VMs simultaneously, please use the next chapter.

* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html OpenStack.org]
 
 
* In addition to above instructions change /etc/nova/nova.conf:

  vnc_keymap =
  force_raw_images = False
  pointer_model = ps2mouse

  [libvirt]
  ...
  vzstorage_mount_user = nova
  vzstorage_mount_group = root
  virt_type = parallels
  images_type = ploop
  connection_uri = vz:///system
  inject_partition = -2
* Remove the 'cpu_mode' parameter or set the following:
 
 
 
  cpu_mode = none
 
  
 
* Then restart nova-compute service:

  $ systemctl restart openstack-nova-compute.service
 
 
* If you plan to run Virtual Machines on your Compute node, change the 'images_type' parameter to 'qcow2'
 
 
 
== Install and configure a block storage node on [[Virtuozzo]] 7 (*Production Setup*) == <!--T:16-->
 
 
 
<!--T:17-->
 
If you are going to run containers AND virtual machines simultaneously on your compute node, you have to use this approach.
 
 
 
* Follow instructions on [http://docs.openstack.org/mitaka/install-guide-rdo/cinder-controller-install.html OpenStack.org]
 
* In addition to above instructions change /etc/cinder/cinder.conf:
 
 
 
[DEFAULT]
 
...
 
enabled_backends = lvmdriver-1,vstorage-ploop,vstorage-qcow2
 
...
 
 
 
[vstorage-ploop]
 
vzstorage_default_volume_format = ploop
 
vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf
 
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
 
volume_backend_name = vstorage-ploop
 
 
 
[vstorage-qcow2]
 
vzstorage_default_volume_format = qcow2
 
vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf
 
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
 
volume_backend_name = vstorage-qcow2
 
 
 
* Create /etc/cinder/vzstorage-shares-vstorage.conf with the following content:
 
 
 
YOUR-CLUSTER-NAME ["-u", "cinder", "-g", "root", "-m", "0770"]
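Generating that one-line shares file from the shell can be sketched as follows. It writes to a scratch path here so it is safe to try; on a real node the target is /etc/cinder/vzstorage-shares-vstorage.conf, and YOUR-CLUSTER-NAME stays a placeholder for your actual cluster name:

```shell
# Write the one-line vzstorage shares config; the mount options mirror
# the line shown above. Scratch path used for demonstration.
CLUSTER=YOUR-CLUSTER-NAME                 # substitute your real cluster name
SHARES=$(mktemp)                          # real node: /etc/cinder/vzstorage-shares-vstorage.conf
echo "$CLUSTER [\"-u\", \"cinder\", \"-g\", \"root\", \"-m\", \"0770\"]" > "$SHARES"
cat "$SHARES"
```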
 
 
 
* Create two new volume types:
 
 
 
$ cinder type-create vstorage-qcow2
 
$ cinder type-key vstorage-qcow2 set volume_backend_name=vstorage-qcow2
 
 
 
$ cinder type-create vstorage-ploop
 
$ cinder type-key vstorage-ploop set volume_backend_name=vstorage-ploop
 
 
 
* Create a directory for storage logs:
 
 
 
$ mkdir /var/log/pstorage
 
 
 
* Authenticate your Virtuozzo Storage client nodes in order to allow them to mount the cluster:
 
 
 
$ echo $CLUSTER_PASSWD | vstorage auth-node -c YOUR-CLUSTER-NAME -P
 
 
* Then restart cinder services:
 
 
 
$ systemctl restart openstack-cinder-api
 
$ systemctl restart openstack-cinder-scheduler
 
$ systemctl restart openstack-cinder-volume
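As an optional smoke test, you can create one volume of each new type and list them. This is a sketch: the volume names are invented, it assumes an admin-scoped OpenStack environment, and it is guarded so it only runs where the cinder client is actually installed:

```shell
# Create a test volume per backend and list volumes; skipped if the
# cinder client is not present on this machine.
if command -v cinder >/dev/null 2>&1; then
  cinder create 1 --volume-type vstorage-ploop --name smoke-ploop
  cinder create 1 --volume-type vstorage-qcow2 --name smoke-qcow2
  cinder list
else
  echo "cinder client not found; skipping smoke test"
fi
```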
 
 
 
== How to create a new ploop image ready to upload to Glance == <!--T:17-->
 
 
 
* Select an OS template. The following templates are possible: vzlinux-7, centos-7, ubuntu-16.04, ubuntu-14.04, debian-8.0, centos-6, debian-8.0-x86_64-minimal
 
 
 
$ ct=centos-7
 
 
 
* Create a new container based on necessary os distribution
 
 
 
$ prlctl create glance-$ct --vmtype ct --ostemplate $ct
 
 
 
* Set the IP address and DNS server so the container can reach the internet
 
 
 
$ prlctl set glance-$ct --ipadd IPADDR --nameserver DNS_IPADDR
 
 
 
* Add additional network adapter
 
 
 
$ prlctl set glance-$ct --device-add net --network Bridged --dhcp on
 
 
 
* Start the container
 
 
 
$ prlctl start glance-$ct
 
 
 
* Install the cloud-init package
 
 
 
$ prlctl exec glance-$ct yum install cloud-init -y
 
 
 
* Stop the container and mount it
 
 
 
$ prlctl stop glance-$ct
 
$ prlctl mount glance-$ct
 
 
 
* Store the container uuid
 
 
 
$ uuid=$(vzlist glance-$ct | awk ' NR>1 { print $1 }')
 
 
 
* Remove the following modules from cloud.cfg
 
 
 
$ sed -i '/- growpart/d' /vz/root/$uuid/etc/cloud/cloud.cfg
 
$ sed -i '/- resizefs/d' /vz/root/$uuid/etc/cloud/cloud.cfg
 
 
 
* Prepare network scripts
 
 
 
cat > /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth0 << _EOF
 
DEVICE=eth0
 
ONBOOT=yes
 
NM_CONTROLLED=no
 
BOOTPROTO=dhcp
 
_EOF
 
 
 
* If you need more than one network adapter within a container, make as many copies as you need
 
 
 
$ cp /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth0 /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth1
 
$ sed -i 's/eth0/eth1/g' /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth1
 
 
 
* Perform some cleanup
 
 
 
$ rm -f /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-venet0*
 
$ rm -f /vz/root/$uuid/etc/resolv.conf
 
 
 
* Create ploop disk and copy files
 
 
 
$ mkdir /tmp/ploop-$ct
 
$ ploop init -s 950M /tmp/ploop-$ct/$ct.hds
 
$ mkdir /tmp/ploop-$ct/dst
 
$ ploop mount -m /tmp/ploop-$ct/dst /tmp/ploop-$ct/DiskDescriptor.xml
 
$ cp -Pr --preserve=all /vz/root/$uuid/* /tmp/ploop-$ct/dst/
 
$ ploop umount -m /tmp/ploop-$ct/dst/
 
 
 
* Unmount the container
 
 
 
$ prlctl umount glance-$ct
 
 
 
* Now the image /tmp/ploop-$ct/$ct.hds is ready to be uploaded to Glance
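The upload can reuse the glance invocation shown earlier for centos7-exe. A sketch that only builds and prints the command for review (the image name glance-$ct is an assumption; adjust properties to taste before running):

```shell
# Build the glance upload command, mirroring the earlier centos7-exe
# example; echoed for review rather than executed here.
ct=centos-7
echo glance image-create --name glance-$ct --disk-format ploop \
  --container-format bare --property vm_mode=exe --property hypervisor_type=vz \
  --file /tmp/ploop-$ct/$ct.hds
```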
 
  
 
== See also == <!--T:100-->
* [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html Controller Node Installation Guide]
* [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html Compute Node Installation Guide]
* [http://docs.openstack.org/newton/install-guide-rdo/environment-packages.html OpenStack Installation Guide]
* [https://docs.openvz.org/ Virtuozzo Documentation]
* [[Virtuozzo ecosystem]]
  
