Setup OpenStack with Virtuozzo 7

This article describes how to install OpenStack on Virtuozzo 7.

Introduction

Virtuozzo has supported OpenStack as a cloud management solution since version 6, and Virtuozzo 7 adds many new capabilities to the OpenStack integration.

This guide describes two ways of installing OpenStack on Virtuozzo nodes: the first suits quick development or proof-of-concept (POC) needs, the second is for production. Keep in mind that devstack installs OpenStack for demo/POC/development purposes only, which means the setup will be reset after a host reboot.

You need the following infrastructure to set up OpenStack with Virtuozzo 7:

  1. Controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and as a Virtuozzo containers host.
  2. Compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as a virtual machines host.

Prerequisites

You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:

$ yum update -y

If you have the br0 bridge configured as an IP interface, you should move the IP address assigned to it to the physical Ethernet interface bridged to br0. You can check your configuration with the following command:

$ if=$(brctl show | grep '^br0' | awk ' { print $4 }') && addr=$(ip addr | grep -w 'br0' | grep inet | awk ' {print $2} ') && gw=$(ip route | grep default | awk ' { print $3 } ') && echo "My interface is '$if', gateway is '$gw', IP address '$addr'"

For instance, you may see the following output after executing the above command:

My interface is 'ens33', gateway is '192.168.190.2', IP address '192.168.190.134/24'.

Then edit /etc/sysconfig/network-scripts/ifcfg-ens33 to have the following content, removing the BRIDGE="br0" line from it:

...
ONBOOT=yes
IPADDR=192.168.190.134
GATEWAY=192.168.190.2
PREFIX=24
...

Remove the /etc/sysconfig/network-scripts/ifcfg-br0 file:

$ rm /etc/sysconfig/network-scripts/ifcfg-br0

Then restart the network service:

$ systemctl restart network
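
To verify that the reconfiguration worked, check that the IP address now sits on the physical interface and that the default route is intact (ens33 here is just the interface name from the example above):

$ ip addr show ens33
$ ip route | grep default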

Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*)

For demo or test purposes you can set up the OpenStack controller node together with a compute node on the same server. In this case a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.

Git must be installed on all your Virtuozzo nodes:

$ yum install git -y

Clone the Virtuozzo scripts:

$ cd /vz
$ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts
$ cd virtuozzo-openstack-scripts

If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node rather than on this one, you need to set up the Virtuozzo Storage client and authorize the controller node in the Virtuozzo Storage cluster.

Set up the Virtuozzo Storage client:

$ yum install vstorage-client -y

First, check that cluster discovery is working:

$ vstorage discover

The output will show the discovered clusters. Now authenticate the controller node in the Virtuozzo Storage cluster:

$ vstorage -c $CLUSTER_NAME auth-node

Enter the Virtuozzo Storage cluster password and press Enter. Then check the cluster properties:

$ vstorage -c $CLUSTER_NAME top

The output will show the Virtuozzo Storage cluster properties and state.

Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md

Example:

$ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.0.0/24 --floating_range 10.24.41.0/24 --floating_pool "start=10.24.41.151,end=10.24.41.199" --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL 

Run the script on your CONTROLLER node and follow instructions (if any):

$ ./setup_devstack_vz7.sh

Installation can take up to 30 minutes, depending on your Internet connection speed. Once it finishes, the controller is ready.
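
As a quick sanity check (an optional step, assuming the script left a standard devstack checkout in the stack user's home directory), you can source the devstack credentials and list the registered services:

$ su stack
$ source ~/devstack/openrc admin admin
$ openstack service list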

Setup OpenStack Compute Node (*Developer/POC Setup*)

Git must be installed on all your Virtuozzo nodes:

$ yum install git -y

Clone the Virtuozzo scripts to your COMPUTE node:

$ cd /vz
$ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts
$ cd /vz/virtuozzo-openstack-scripts

If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node rather than on the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster. Set up the Virtuozzo Storage client:

$ yum install vstorage-client -y

First, check that cluster discovery is working:

$ vstorage discover

The output will show the discovered clusters. Now authenticate the compute node in the Virtuozzo Storage cluster:

$ vstorage -c $CLUSTER_NAME auth-node

Enter the Virtuozzo Storage cluster password and press Enter. Then check the cluster properties:

$ vstorage -c $CLUSTER_NAME top

The output will show the Virtuozzo Storage cluster properties and state.

Configure the script on the COMPUTE node. Please read the script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md

Example:

$ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1!  --use_provider_network true --mode COMPUTE --controller 10.24.41.25 

Run the script on your COMPUTE node and follow instructions (if any):

$ ./setup_devstack_vz7.sh
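
Optionally, verify from the CONTROLLER node that the new compute host has registered itself; with the admin credentials sourced, a standard check is:

$ nova service-list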

How to change Virtualization Type to Virtual Machines on the Compute Node

If you want to use virtual machines instead of containers on your compute node, you need to change the virtualization type to KVM on the selected compute node.

Open the nova configuration file:

$ vi /etc/nova/nova.conf

Change the following lines:

[libvirt]
...
virt_type = parallels
images_type = qcow2
connection_uri = vz:///system

Delete the line:

inject_partition = -2

Save the file.

Restart the nova-compute service:

$ su stack
$ screen -r

Press Ctrl+C to stop the running nova-compute process, then start it again:

$ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' & echo $! >/vz/stack/status/stack/n-cpu.pid; fg || echo "n-cpu failed to start" | tee "/vz/stack/status/stack/n-cpu.failure"

To detach from the screen session, press Ctrl+a, then d.
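
To confirm that nova-compute is running again after the restart (a simple process check, not part of the original procedure):

$ pgrep -af nova-compute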

How to redeploy OpenStack on the same nodes

Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:

  1. cd /vz/virtuozzo-openstack-scripts
  2. git pull
  3. Run ./setup_devstack_vz7.sh with the options you need (see the example below).
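
For example, a redeploy on the controller node might look like this (reusing the vzrc options shown earlier; adjust them to your environment):

$ cd /vz/virtuozzo-openstack-scripts
$ git pull
$ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true --mode ALL
$ ./setup_devstack_vz7.sh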

Installing OpenStack with the help of packstack on Virtuozzo 7 (*Production Setup*)

  • Install the Virtuozzo Platform Release package on all Virtuozzo OpenStack nodes:
$ yum install vz-platform-release
  • Install the packstack package:
$ yum install openstack-packstack
  • Download the sample Vz7 packstack answer file:
$ wget https://raw.githubusercontent.com/virtuozzo/virtuozzo-openstack-scripts/master/vz7-packstack-ocata.txt
  • Edit vz7-packstack-ocata.txt, enabling/disabling the necessary services.
  • Replace all references to the 'localhost' and '127.0.0.1' host addresses with the correct values.
  • Set all password parameters containing the PW_PLACEHOLDER string to meaningful values.
  • If you are going to use Virtuozzo Storage as a Cinder volume backend, set the following parameters:
 # Enable Virtuozzo Storage
 CONFIG_VSTORAGE_ENABLED=y
 # VStorage cluster name.
 CONFIG_VSTORAGE_CLUSTER_NAME=
 # VStorage cluster password.
 CONFIG_VSTORAGE_CLUSTER_PASSWORD= 
 # Bridge mappings
 CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet1:br-ex
 # Bridge interfaces
 CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0
 # Bridge mapping for compute node
 CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=extnet1:br-ex
  • Then run packstack:
$ packstack --answer-file=vz7-packstack-ocata.txt
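
When packstack finishes, it writes an RC file with admin credentials (keystonerc_admin in root's home directory by default). A quick way to check that the deployment is functional:

$ source /root/keystonerc_admin
$ openstack service list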

Install and configure a nova controller node on Virtuozzo 7 (*Production Setup*)

  • Follow the instructions on OpenStack.org: http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html
  • Download the container image: http://updates.virtuozzo.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz
  • Unpack it:
$ tar -xzvf centos7-exe.hds.tar.gz
  • Upload the image to glance:

NOTE: this image was created for testing purposes only. Don't use it in production as is!

$ glance image-create --name centos7-exe --disk-format ploop --min-ram 512 --min-disk 1 --container-format bare --property vm_mode=exe --property hypervisor_type=vz --property cinder_img_volume_type=vstorage-ploop --file centos7-exe.hds
$ glance image-create --name centos7-hvm --disk-format qcow2 --min-ram 1024 --min-disk 10 --container-format bare --property cinder_img_volume_type=vstorage-qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2
  • The CentOS cloud image can be downloaded from http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
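
After the upload, you can verify that both images are registered and in the active state, for example:

$ glance image-list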

Install and configure a compute node on Virtuozzo 7 (*Production Setup*)

Use this chapter if you are going to run containers OR virtual machines on your compute node, but not both simultaneously. If you need to run containers and VMs simultaneously, use the next chapter instead.

  • Follow the instructions on OpenStack.org: http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html
  • In addition to the above instructions, change /etc/nova/nova.conf:
[DEFAULT]
...
vnc_keymap =
force_raw_images = False
pointer_model = ps2mouse
[libvirt]
...
vzstorage_mount_user = nova
vzstorage_mount_group = root
virt_type = parallels
images_type = ploop
connection_uri = vz:///system
  • Remove the 'cpu_mode' parameter or set it as follows:
cpu_mode = none
  • Then restart the nova-compute service:
$ systemctl restart openstack-nova-compute.service
  • If you plan to run virtual machines on your compute node, change the 'images_type' parameter to 'qcow2'.
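
After the restart, you can check from the controller (with admin credentials sourced) that the compute node is registered as a hypervisor, for example:

$ nova hypervisor-list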

Install and configure a block storage node on Virtuozzo 7 (*Production Setup*)

If you are going to run containers AND virtual machines simultaneously on your compute node you have to use this approach.

  • Follow the instructions on OpenStack.org: http://docs.openstack.org/mitaka/install-guide-rdo/cinder-controller-install.html
  • In addition to the above instructions, change /etc/cinder/cinder.conf:
[DEFAULT]
...
enabled_backends = lvmdriver-1,vstorage-ploop,vstorage-qcow2
...
[vstorage-ploop]
vzstorage_default_volume_format = ploop
vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
volume_backend_name = vstorage-ploop
[vstorage-qcow2]
vzstorage_default_volume_format = qcow2
vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
volume_backend_name = vstorage-qcow2
  • Create /etc/cinder/vzstorage-shares-vstorage.conf with the following content:
YOUR-CLUSTER-NAME ["-u", "cinder", "-g", "root", "-m", "0770"]
  • Create two new volume types:
$ cinder type-create vstorage-qcow2
$ cinder type-key vstorage-qcow2 set volume_backend_name=vstorage-qcow2
$ cinder type-create vstorage-ploop
$ cinder type-key vstorage-ploop set volume_backend_name=vstorage-ploop
  • Create a directory for the storage logs:
$ mkdir /var/log/pstorage
  • Authenticate your Virtuozzo Storage client nodes in order to allow them to mount the cluster:
$ echo $CLUSTER_PASSWD | vstorage auth-node -c YOUR-CLUSTER-NAME -P

  • Then restart the cinder services:
$ systemctl restart openstack-cinder-api
$ systemctl restart openstack-cinder-scheduler
$ systemctl restart openstack-cinder-volume
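
To check that the new backends came up, you can list the cinder services and create a small test volume of each type (the volume names here are only examples):

$ cinder service-list
$ cinder create --volume-type vstorage-ploop --name test-ploop 1
$ cinder create --volume-type vstorage-qcow2 --name test-qcow2 1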

How to create a new ploop image ready to upload to Glance

  • Select an OS template. The following templates are available: vzlinux-7, centos-7, ubuntu-16.04, ubuntu-14.04, debian-8.0, centos-6, debian-8.0-x86_64-minimal
$ ct=centos-7
  • Create a new container based on the selected OS distribution:
$ prlctl create glance-$ct --vmtype ct --ostemplate $ct
  • Set the IP address and DNS server so that the container can connect to the Internet:
$ prlctl set glance-$ct --ipadd IPADDR --nameserver DNS_IPADDR
  • Add an additional network adapter:
$ prlctl set glance-$ct --device-add net --network Bridged --dhcp on
  • Start the container:
$ prlctl start glance-$ct
  • Install the cloud-init package:
$ prlctl exec glance-$ct yum install cloud-init -y
  • Stop the container and mount it:
$ prlctl stop glance-$ct
$ prlctl mount glance-$ct
  • Store the container UUID:
$ uuid=$(vzlist glance-$ct | awk ' NR>1 { print $1 }')
  • Remove the following modules from cloud.cfg:
$ sed -i '/- growpart/d' /vz/root/$uuid/etc/cloud/cloud.cfg
$ sed -i '/- resizefs/d' /vz/root/$uuid/etc/cloud/cloud.cfg
  • Prepare the network scripts:
cat > /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth0 << _EOF
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=dhcp
_EOF
  • If you need more than one network adapter within a container, make as many copies as you need:
$ cp /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth0 /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth1
$ sed -i 's/eth0/eth1/' /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth1
  • Perform some cleanup:
$ rm -f /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-venet0*
$ rm -f /vz/root/$uuid/etc/resolv.conf
  • Create a ploop disk and copy the files:
$ mkdir /tmp/ploop-$ct
$ ploop init -s 950M /tmp/ploop-$ct/$ct.hds
$ mkdir /tmp/ploop-$ct/dst
$ ploop mount -m /tmp/ploop-$ct/dst /tmp/ploop-$ct/DiskDescriptor.xml
$ cp -Pr --preserve=all /vz/root/$uuid/* /tmp/ploop-$ct/dst/
$ ploop umount -m /tmp/ploop-$ct/dst/
  • Unmount the container:
$ prlctl umount glance-$ct
  • Now the image /tmp/ploop-$ct/$ct.hds is ready to be uploaded to Glance, as shown below.
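
As a sketch of that final upload step, reusing the glance options from the controller section above (the image name and min-ram/min-disk values are examples):

$ glance image-create --name $ct --disk-format ploop --min-ram 512 --min-disk 1 --container-format bare --property vm_mode=exe --property hypervisor_type=vz --file /tmp/ploop-$ct/$ct.hds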

See also

  • Controller Node Installation Guide: http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html
  • Compute Node Installation Guide: http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html
  • OpenStack Installation Guide: http://docs.openstack.org/newton/install-guide-rdo/environment-packages.html
  • Virtuozzo Documentation: https://docs.openvz.org/
  • Virtuozzo ecosystem