Setup OpenStack with Virtuozzo 7

From OpenVZ Virtuozzo Containers Wiki

Revision as of 14:15, 30 August 2016

<translate> This article describes how to install OpenStack on Virtuozzo 7.

Introduction

Virtuozzo has supported OpenStack as a cloud management solution since version 6. Virtuozzo 7 adds many new capabilities to the OpenStack integration. Current limitations (bugs, features not yet implemented, or by design):

  1. HA does not work.
  2. Virtuozzo Storage is not supported for containers and VMs in cinder.

This guide shows how to install OpenStack on Virtuozzo nodes with the help of Devstack. Devstack installs a stateless OpenStack for demo purposes, which means the setup is reset after a host reboot; virtual machines are therefore the best platform for this kind of setup.

Please note that OpenStack currently does not support containers and virtual machines on the same node, so you need at least two nodes to try both container and VM management.

You need the following infrastructure to setup OpenStack with Virtuozzo 7:

  1. Controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and as a Virtuozzo containers host.
  2. Compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as a virtual machines host.

Prerequisites

You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:

$ yum update -y

IP connection tracking should be enabled for CT0. Please do the following:

  1. Open the file /etc/modprobe.d/vz.conf
  2. Change the line options nf_conntrack ip_conntrack_disable_ve0=1 to options nf_conntrack ip_conntrack_disable_ve0=0
  3. Reboot the system
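Step 2 can be done non-interactively with sed; a minimal sketch, demonstrated on a scratch copy so it is safe to run as-is (point it at /etc/modprobe.d/vz.conf on a real node, then reboot):

```shell
# Enable IP connection tracking for CT0 (sketch; shown on a scratch copy of vz.conf)
tmp=$(mktemp)
echo 'options nf_conntrack ip_conntrack_disable_ve0=1' > "$tmp"
sed -i 's/ip_conntrack_disable_ve0=1/ip_conntrack_disable_ve0=0/' "$tmp"
cat "$tmp"
```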

Git must be installed on all your Virtuozzo nodes:

$ yum install git -y

Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*)

You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support is deployed. You can add more compute nodes with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.

Clone virtuozzo scripts:

$ cd /vz
$ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts
$ cd virtuozzo-openstack-scripts

If you want to use Virtuozzo Storage with OpenStack and it is running on another node rather than on the compute node itself, you need to set up the Virtuozzo Storage client and authorize the node in the Virtuozzo Storage cluster.

Setup Virtuozzo Storage client:

$ yum install vstorage-client -y

First, check that cluster discovery is working:

$ vstorage discover

The output shows the discovered clusters. Now authenticate the controller node on the Virtuozzo Storage cluster:

$ vstorage -c $CLUSTER_NAME auth-node -P

Enter the Virtuozzo Storage cluster password and press Enter. Check the cluster properties:

$ vstorage -c $CLUSTER_NAME top

The output shows the Virtuozzo Storage cluster properties and state.

Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md

Example:

$ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.0.0/24 --floating_range 10.24.41.0/24 --floating_pool "start=10.24.41.151,end=10.24.41.199" --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL 

Run the script on your CONTROLLER node and follow instructions (if any):

$ ./setup_devstack_vz7.sh

Installation can take up to 30 minutes depending on your Internet connection speed. Finished!

Setup OpenStack Compute Node (*Developer/POC Setup*)

Clone Virtuozzo scripts to your COMPUTE node:

$ cd /vz
$ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts
$ cd /vz/virtuozzo-openstack-scripts

If you want to use Virtuozzo Storage with OpenStack and it is running on another node rather than on the compute node itself, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster. Set up the Virtuozzo Storage client:

$ yum install vstorage-client -y

First, check that cluster discovery is working:

$ vstorage discover

The output shows the discovered clusters. Now authenticate the compute node on the Virtuozzo Storage cluster:

$ vstorage -c $CLUSTER_NAME auth-node -P

Enter the Virtuozzo Storage cluster password and press Enter. Check the cluster properties:

$ vstorage -c $CLUSTER_NAME top

The output shows the Virtuozzo Storage cluster properties and state.

Configure the script on the COMPUTE node. Please read the script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md

Example:

$ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1!  --use_provider_network true --mode COMPUTE --controller 10.24.41.25 

Run the script on your COMPUTE node and follow instructions (if any):

$ ./setup_devstack_vz7.sh

How to change Virtualization Type to Virtual Machines on the Compute Node

If you want to use virtual machines instead of containers on your compute node, you need to change the virtualization type to KVM on the selected compute node.

Open nova configuration file:

$ vi /etc/nova/nova.conf

Change the following lines:

[libvirt]
...
virt_type = kvm
images_type = qcow2
connection_uri = parallels:///system

Delete the line:

inject_partition = -2

Save the file.
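The edits above can also be applied with sed; a minimal sketch, shown against a scratch copy of the [libvirt] section so it is safe to run (point it at /etc/nova/nova.conf on a real node):

```shell
# Switch the [libvirt] settings from containers to KVM (sketch on a scratch copy)
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[libvirt]
virt_type = parallels
images_type = ploop
inject_partition = -2
connection_uri = parallels:///system
EOF
sed -i -e 's/^virt_type = .*/virt_type = kvm/' \
       -e 's/^images_type = .*/images_type = qcow2/' \
       -e '/^inject_partition = -2$/d' "$tmp"
cat "$tmp"
```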

Restart nova-compute service:

$ su stack
$ screen -r

Press Ctrl-c

$ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' & echo $! >/vz/stack/status/stack/n-cpu.pid; fg || echo "n-cpu failed to start" | tee "/vz/stack/status/stack/n-cpu.failure"

To detach from the screen session, press Ctrl+a, then d.

How to redeploy OpenStack on the same nodes

Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:

  1. cd /vz/virtuozzo-openstack-scripts
  2. git pull
  3. Run ./setup_devstack_vz7.sh with options you need.

Install and configure a nova controller node on Virtuozzo 7 (*Production Setup*)

  • Follow the instructions on OpenStack.org: http://docs.openstack.org/mitaka/install-guide-rdo/nova-controller-install.html
  • Change the disk_formats line in /etc/glance/glance-api.conf so that it contains 'ploop', like this:
disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop
  • Restart glance-api service:
systemctl restart openstack-glance-api.service
  • Download the container image
  • Unpack it
$ tar -xzvf centos7-exe.hds.tar.gz
  • Upload the image to glance:

NOTE: this image was created for testing purposes only. Don't use it in production as is!

$ glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --property hypervisor_type=vz --property cinder_img_volume_type=vstorage-ploop --file centos7-exe.hds
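The glance-api.conf change above (adding 'ploop' to disk_formats) can be scripted; a minimal sed sketch, demonstrated on a scratch copy (use /etc/glance/glance-api.conf on a real node, then restart glance-api as shown above):

```shell
# Append ploop to disk_formats unless it is already present (sketch on a scratch copy)
tmp=$(mktemp)
echo 'disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso' > "$tmp"
sed -i '/^disk_formats = /{/ploop/!s/$/,ploop/}' "$tmp"
cat "$tmp"
```

The nested address block makes the edit idempotent: running it twice does not append 'ploop' a second time.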

Install and configure a compute node on Virtuozzo 7 (*Production Setup*)

  • Follow the instructions on OpenStack.org: http://docs.openstack.org/mitaka/install-guide-rdo/nova-compute-install.html
  • In addition to the above instructions, change /etc/nova/nova.conf:
[DEFAULT]
...
vnc_keymap =
force_raw_images = False
[libvirt]
...
vzstorage_mount_group = root
virt_type = parallels
images_type = ploop
connection_uri = parallels:///system

  • Then restart nova-compute service:
$ systemctl restart openstack-nova-compute.service
  • If you plan to run virtual machines on your compute node, change the 'images_type' parameter to 'qcow2'.


Install and configure a block storage node on Virtuozzo 7 (*Production Setup*)

  • Follow the instructions on OpenStack.org: http://docs.openstack.org/mitaka/install-guide-rdo/cinder-controller-install.html
  • In addition to the above instructions, change /etc/cinder/cinder.conf:
[DEFAULT]
...
enabled_backends = lvmdriver-1,vstorage-ploop,vstorage-qcow2
...
[vstorage-ploop]
vzstorage_default_volume_format = parallels
vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
volume_backend_name = vstorage-ploop
[vstorage-qcow2]
vzstorage_default_volume_format = qcow2
vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
volume_backend_name = vstorage-qcow2
  • Create /etc/cinder/vzstorage-shares-vstorage.conf with the following content:
YOUR-CLUSTER-NAME ["-u", "cinder", "-g", "root", "-m", "0770"]
  • Create two new volume types:
$ cinder type-create vstorage-qcow2
$ cinder type-key vstorage-qcow2 set volume_backend_name=vstorage-qcow2
$ cinder type-create vstorage-ploop
$ cinder type-key vstorage-ploop set volume_backend_name=vstorage-ploop 

  • Then restart cinder services:
$ systemctl restart openstack-cinder-api
$ systemctl restart openstack-cinder-scheduler
$ systemctl restart openstack-cinder-volume


See also

  • OpenStack installation guide: http://docs.openstack.org/mitaka/install-guide-rdo/environment-packages.html
  • Virtuozzo documentation: https://docs.openvz.org/
  • Virtuozzo ecosystem

</translate>