Setup OpenStack with Virtuozzo 7

Revision as of 14:10, 8 June 2017
== Installing OpenStack with the help of packstack on Virtuozzo 7 (*Production Setup*) ==

Update the packages on all nodes:
$ yum update -y
Git must be installed on all your Virtuozzo nodes:
$ yum install git -y
If you have a br0 bridge configured as an IP interface, you will most probably need to move the IP address assigned to it to the physical Ethernet interface bridged to br0. You can check your configuration with the following command:
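The exact command was lost from this revision; one common way to see which interface currently holds the node IP (an assumption, not the original command) is:

```shell
# List every interface with its assigned addresses, so you can check
# whether br0 or the physical interface carries the node IP.
ip addr show
```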
You can set up an OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.
 
Git must be installed on all your Virtuozzo nodes:
$ yum install git -y
Clone the Virtuozzo scripts:
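The clone command itself is missing from this revision; judging by the repository referenced later on this page, it is presumably:

```shell
# Clone the Virtuozzo OpenStack helper scripts (repository URL taken from
# the README link later on this page).
git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts.git
```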
== Setup OpenStack Compute Node (*Developer/POC Setup*) ==
 
Git must be installed on all your Virtuozzo nodes:
$ yum install git -y
Clone Virtuozzo scripts to your COMPUTE node:
The output will show the Virtuozzo Storage cluster properties and state.
Configure the script on the COMPUTE node. Please read the script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md
Example:
virt_type = parallels
images_type = qcow2
connection_uri = vz:///system
Delete the line:
* Create a new repo file on all Virtuozzo OpenStack nodes:
cat > /etc/yum.repos.d/virtuozzo-extra.repo << _EOF
[virtuozzo-extra]
name=Virtuozzo Extra
baseurl=http://repo.virtuozzo.com/openstack/newton/x86_64/os/
enabled=1
gpgcheck=1
priority=50
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-7
_EOF
* Add the RDO repository:
$ yum install https://rdoproject.org/repos/rdo-release.rpm
* Install packstack package:
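The package name is missing from this revision; on RDO-based setups the packstack installer is normally shipped as openstack-packstack (treat the exact package name as an assumption):

```shell
# Install the packstack installer from the RDO repository added above.
yum install -y openstack-packstack
```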
* Download sample Vz7 packstack answer file:
$ wget https://raw.githubusercontent.com/virtuozzo/virtuozzo-openstack-scripts/master/vz7-packstack-ocata.txt
* Edit vz7-packstack-ocata.txt, enabling/disabling the necessary services.
* Replace all references to 'localhost' and '127.0.0.1' host addresses with correct values. In particular, make sure the following parameters specify correct IP addresses:
CONFIG_CONTROLLER_HOST
CONFIG_COMPUTE_HOSTS
CONFIG_NETWORK_HOSTS
CONFIG_AMQP_HOST
CONFIG_MARIADB_HOST
CONFIG_REDIS_HOST
* Set all password parameters containing the PW_PLACEHOLDER string to meaningful values.
* Change the CONFIG_DEFAULT_PASSWORD parameter!
* If you are going to use Virtuozzo Storage as a Cinder volume backend, set the following parameters:
# Enable Virtuozzo Storage
CONFIG_VSTORAGE_ENABLED=y
# VStorage cluster name.
CONFIG_VSTORAGE_CLUSTER_NAME=
# VStorage cluster password.
CONFIG_VSTORAGE_CLUSTER_PASSWORD=
# Bridge mappings
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet1:br-ex
# Bridge interfaces
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0
# Bridge mapping for compute node
CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=extnet1:br-ex
* Then run packstack:
$ packstack --answer-file=vz7-packstack-ocata.txt
== Install and configure a nova controller node on [[Virtuozzo]] 7 (*Production Setup*) ==
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html OpenStack.org]
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:
disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop
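One way to apply this change without hand-editing is a sed one-liner. The sketch below rehearses it against a throwaway copy so you can verify the result first (the sample line and temp path are illustrative, not from the original page):

```shell
# Append ',ploop' to the existing disk_formats line in a scratch copy.
cfg=$(mktemp)
printf 'disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso\n' > "$cfg"
sed -i 's/^disk_formats *=.*/&,ploop/' "$cfg"   # '&' is the whole matched line
grep '^disk_formats' "$cfg"   # disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop
```

Once the output looks right, run the same sed expression against /etc/glance/glance-api.conf.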
 
* Restart glance-api service:
 
systemctl restart openstack-glance-api.service
* Download the container [http://updates.virtuozzo.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]
NOTE: this image was created for testing purposes only. Don't use it in production as is!
$ glance image-create --name centos7-exe --disk-format ploop --min-ram 512 --min-disk 1 --container-format bare --property vm_mode=exe --property hypervisor_type=vz --property cinder_img_volume_type=vstorage-ploop --file centos7-exe.hds
$ glance image-create --name centos7-hvm --disk-format qcow2 --min-ram 1024 --min-disk 10 --container-format bare --property cinder_img_volume_type=vstorage-qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2
* The CentOS cloud image can be downloaded from [http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 here]
== Install and configure a compute node on [[Virtuozzo]] 7 (*Production Setup*) ==
Please use this chapter if you are going to run either containers OR virtual machines on your compute node, but not containers AND virtual machines simultaneously. If you need to run containers and VMs simultaneously, please use the next chapter.
 
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html OpenStack.org]
* In addition to the above instructions, change /etc/nova/nova.conf:
vnc_keymap =
force_raw_images = False
pointer_model = ps2mouse
[libvirt]
...
vzstorage_mount_user = nova
vzstorage_mount_group = root
virt_type = parallels
images_type = ploop
connection_uri = vz:///system
* Remove 'cpu_mode' parameter or set the following:
cpu_mode=none
* Then restart nova-compute service:
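The restart command is omitted in this revision; on an RDO install the service unit is normally openstack-nova-compute, matching the openstack-glance-api and openstack-cinder-* units used elsewhere on this page (the unit name is an assumption):

```shell
# Restart the nova compute service so the nova.conf changes take effect.
systemctl restart openstack-nova-compute
```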
If you are going to run containers AND virtual machines simultaneously on your compute node, you have to use this approach.
 
* Follow instructions on [http://docs.openstack.org/mitaka/install-guide-rdo/cinder-controller-install.html OpenStack.org]
* In addition to the above instructions, change /etc/cinder/cinder.conf:
[vstorage-ploop]
vzstorage_default_volume_format = ploop
vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
* Authenticate your Virtuozzo Storage client nodes in order to allow them to mount the cluster:
$ echo $CLUSTER_PASSWD | vstorage auth-node -c YOUR-CLUSTER-NAME -P
* Then restart cinder services:
$ systemctl restart openstack-cinder-scheduler
$ systemctl restart openstack-cinder-volume
 
== How to create a new ploop image ready to upload to Glance ==
 
* Select an OS template. The following templates are possible: vzlinux-7, centos-7, ubuntu-16.04, ubuntu-14.04, debian-8.0, centos-6, debian-8.0-x86_64-minimal
 
$ ct=centos-7
 
* Create a new container based on the necessary OS distribution
 
$ prlctl create glance-$ct --vmtype ct --ostemplate $ct
 
* Set an IP address and DNS server to be able to connect to the internet from the container
 
$ prlctl set glance-$ct --ipadd IPADDR --nameserver DNS_IPADDR
 
* Add an additional network adapter
 
$ prlctl set glance-$ct --device-add net --network Bridged --dhcp on
 
* Start the container
 
$ prlctl start glance-$ct
 
* Install the cloud-init package
 
$ prlctl exec glance-$ct yum install cloud-init -y
 
* Stop the container and mount it
 
$ prlctl stop glance-$ct
$ prlctl mount glance-$ct
 
* Store the container uuid
 
$ uuid=$(vzlist glance-$ct | awk ' NR>1 { print $1 }')
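vzlist prints a header row first, which is why the awk filter skips NR==1 and takes the first column of the data row. With simulated vzlist output (the column values below are made up for illustration):

```shell
# Header line is skipped (NR>1); the first field of the data row is the CTID.
uuid=$(printf 'CTID NPROC STATUS IP_ADDR HOSTNAME\n101 15 running 10.0.0.5 glance-centos-7\n' \
  | awk 'NR>1 { print $1 }')
echo "$uuid"   # 101
```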
 
* Remove the following modules from cloud.cfg
 
$ sed -i '/- growpart/d' /vz/root/$uuid/etc/cloud/cloud.cfg
$ sed -i '/- resizefs/d' /vz/root/$uuid/etc/cloud/cloud.cfg
 
* Prepare network scripts
 
cat > /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth0 << _EOF
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=dhcp
_EOF
 
* If you need more than one network adapter within a container, make as many copies as you need
 
$ cp /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth0 /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth1
$ sed -i 's/eth0/eth1/' /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth1
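The copy-and-rename step can be rehearsed in a throwaway directory before touching the mounted container filesystem (the temp path is illustrative):

```shell
# Recreate ifcfg-eth0, copy it to ifcfg-eth1, and rewrite the device name.
dir=$(mktemp -d)
cat > "$dir/ifcfg-eth0" << _EOF
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=dhcp
_EOF
cp "$dir/ifcfg-eth0" "$dir/ifcfg-eth1"
sed -i 's/eth0/eth1/' "$dir/ifcfg-eth1"
grep '^DEVICE' "$dir/ifcfg-eth1"   # DEVICE=eth1
```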
 
* Perform some cleanup
 
$ rm -f /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-venet0*
$ rm -f /vz/root/$uuid/etc/resolv.conf
 
* Create ploop disk and copy files
 
$ mkdir /tmp/ploop-$ct
$ ploop init -s 950M /tmp/ploop-$ct/$ct.hds
$ mkdir /tmp/ploop-$ct/dst
$ ploop mount -m /tmp/ploop-$ct/dst /tmp/ploop-$ct/DiskDescriptor.xml
$ cp -Pr --preserve=all /vz/root/$uuid/* /tmp/ploop-$ct/dst/
$ ploop umount -m /tmp/ploop-$ct/dst/
 
* Unmount the container
 
$ prlctl umount glance-$ct
 
* Now the image /tmp/ploop-$ct/$ct.hds is ready to be uploaded to Glance
== See also ==