<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.openvz.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Vporokhov</id>
	<title>OpenVZ Virtuozzo Containers Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.openvz.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Vporokhov"/>
	<link rel="alternate" type="text/html" href="https://wiki.openvz.org/Special:Contributions/Vporokhov"/>
	<updated>2026-05-14T14:58:32Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=23029</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=23029"/>
		<updated>2018-09-19T13:10:48Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: /* Feature comparison of different virtualization solutions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remain at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. It can be changed by any OpenVZ Wiki user without notice or the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environments, no dependency on the host OS, but overhead for the hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Guests share the same host OS instance: high density, high performance, strong dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running virtual machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (replaced OpenVZ as of version 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management and different policies&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change the amount of RAM for a CT or VM without reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated GUI'''&lt;br /&gt;
|Centralized multi-server management&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backup using agents in the guests)?&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots, new version is not finished yet)&lt;br /&gt;
|{{No}} (only through snapshots, new version is not finished yet)&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or add-on P2V (or V2V) capability for converting physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not with zero downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|Qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}}, depends on underlying storage driver&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Software-defined Storage'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network bandwidth or traffic for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of these features are relevant only to Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel update without reboot'''&lt;br /&gt;
|Integrated ability to upgrade the kernel or install security patches without downtime.&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{Yes}}, Rebootless Kernel Update&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool used for managing particular virtual machines and containers by their end users.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for using in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an Open Source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Both community and commercial support (Canonical Ltd.)&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=23028</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=23028"/>
		<updated>2018-09-19T13:05:21Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is the Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remains at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions are taken by authors from public sources only. This information can be changed by any OpenVZ Wiki user without any notice and author's review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of underneath hardware level: full isolation guest environment, no dependencies from host OS, overhead for hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same instance of host OS: high density, high performance, high dependencies from host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables to run Virtual Machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables to run Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (moved from OpenVZ since 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management and different policies&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change amount of RAM for CT and VM without reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated GUI'''&lt;br /&gt;
|Centralized multi-server management&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to backup virtual environments (over and above the ability to perform classic backup using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots, new version is not finished yet)&lt;br /&gt;
|{{No}} (only through snapshots, new version is not finished yet)&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability in order to convert physical systems to virtual environment.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but with no zero downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|Qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-though (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}}, depends on underlying storage driver&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Software-defined Storage'''&lt;br /&gt;
|Enhanced storage capability  e.g. providing a virtual SAN through virtualized 'local' storage &lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network bandwidth for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of these features are relevant only to Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel update without reboot'''&lt;br /&gt;
|Integrated ability to upgrade kernel or install security patches without downtime.&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{Yes}}, Rebootless Kernel Update&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for use in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an Open Source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=23027</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=23027"/>
		<updated>2018-09-19T13:04:22Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remain at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. It can be changed by any OpenVZ Wiki user without notice and without the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependencies on the host OS, overhead for the hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same instance of the host OS: high density, high performance, strong dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running Virtual Machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (replaced OpenVZ in 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management and different policies&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change the amount of RAM for a CT or VM without a reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated GUI'''&lt;br /&gt;
|Centralized multi-server management&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backup using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots, new version is not finished yet)&lt;br /&gt;
|{{No}} (only through snapshots, new version is not finished yet)&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability in order to convert physical systems to virtual environment.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not zero-downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|Qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}}, depends on underlying storage driver&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Software-defined Storage'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network bandwidth for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of these features are relevant only to Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel update without reboot'''&lt;br /&gt;
|Integrated ability to upgrade kernel or install security patches without downtime.&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{Yes}}, Rebootless Kernel Update&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for use in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an Open Source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=23026</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=23026"/>
		<updated>2018-09-19T12:59:15Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: /* Feature comparison of different virtualization solutions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remain at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. It can be changed by any OpenVZ Wiki user without notice and without the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependencies on the host OS, overhead for the hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same instance of the host OS: high density, high performance, strong dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running Virtual Machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (replaced OpenVZ in 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management and different policies&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change the amount of RAM for a CT or VM without a reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated GUI'''&lt;br /&gt;
|Centralized multi-server management&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backups using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots, new version is not finished yet)&lt;br /&gt;
|{{No}} (only through snapshots, new version is not finished yet)&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability to convert physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not with zero downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|Qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}}, depends on underlying storage driver&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Software-defined Storage'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network traffic for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of these features are relevant only to Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} Application Image Catalog [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel update without reboot'''&lt;br /&gt;
|Integrated ability to upgrade kernel or install security patches without downtime.&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{Yes}}, Rebootless Kernel Update&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for using in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an Open Source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=23025</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=23025"/>
		<updated>2018-09-19T12:57:14Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: /* Feature comparison of different virtualization solutions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remain at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. It can be changed by any OpenVZ Wiki user without notice and without the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependency on the host OS, at the cost of hypervisor overhead.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Guests share the same host OS instance: high density, high performance, strong dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running virtual machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (replaced OpenVZ in 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management and different policies&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change the amount of RAM for a CT or VM without a reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated GUI'''&lt;br /&gt;
|Centralized multi-server management&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backups using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots, new version is not finished yet)&lt;br /&gt;
|{{No}} (only through snapshots, new version is not finished yet)&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability to convert physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not with zero downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|Qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}}, depends on underlying storage driver&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Software-defined Storage'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network traffic for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of these features are relevant only to Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} Application Image Catalog [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel update without reboot'''&lt;br /&gt;
|Integrated ability to upgrade kernel or install security patches without downtime.&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{Yes}}, Rebootless Kernel Update&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for using in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an Open Source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22658</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22658"/>
		<updated>2017-06-16T12:05:25Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: /* Feature comparison of different virtualization solutions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remains at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. This information can be changed by any OpenVZ Wiki user without notice and without the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependencies on the host OS, overhead from the hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same instance of the host OS: high density, high performance, strong dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running Virtual Machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (moved from OpenVZ since 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management and different policies&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change amount of RAM for CT and VM without reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated GUI'''&lt;br /&gt;
|Centralized multi-server management&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to backup virtual environments (over and above the ability to perform classic backup using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability in order to convert physical systems to virtual environment.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not with zero downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|Qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-though (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}}, depends on underlying storage driver&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability  e.g. providing a virtual SAN through virtualized 'local' storage &lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network traffic of CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most features are relevant only for Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} Application Image Catalog [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} Application Image Catalog [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel update without reboot'''&lt;br /&gt;
|Integrated ability to upgrade kernel or install security patches without downtime.&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{Yes}}, Rebootless Kernel Update&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool used for managing particular virtual machines and containers by their end users.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for using in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an Open Source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22610</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22610"/>
		<updated>2017-04-21T11:50:01Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: /* Installing OpenStack with help of packstack on Virtuozzo 7 (*Production Setup*) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6. Virtuozzo 7 adds many new capabilities to the OpenStack integration. &lt;br /&gt;
&lt;br /&gt;
This guide describes two ways of installing OpenStack on Virtuozzo nodes: the first is for quick development/POC needs, the second is for production. Please keep in mind that devstack installs OpenStack for demo/POC/development purposes only, which means the setup will be reset after a host reboot.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to setup OpenStack with Virtuozzo 7:&lt;br /&gt;
#controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and Virtuozzo containers host.&lt;br /&gt;
#compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as a virtual machines host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
If you have the br0 bridge configured as an IP interface, you should move the IP address assigned to it to the physical Ethernet interface bridged to br0.&lt;br /&gt;
You can check your configuration with the following command:&lt;br /&gt;
&lt;br /&gt;
 $ if=$(brctl show | grep '^br0' | awk ' { print $4 }') &amp;amp;&amp;amp; addr=$(ip addr | grep -w 'br0' | grep inet | awk ' {print $2} ') &amp;amp;&amp;amp; gw=$(ip route | grep default | awk ' { print $3 } ') &amp;amp;&amp;amp; echo &amp;quot;My interface is '$if', gateway is '$gw', IP address '$addr'&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For instance, the above command may produce the following output:&lt;br /&gt;
&lt;br /&gt;
 My interface is 'ens33', gateway is '192.168.190.2', IP address '192.168.190.134/24'.&lt;br /&gt;
&lt;br /&gt;
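The one-liner above packs several extractions together; as an illustration, the bridge-interface lookup alone can be written as a small readable filter. This is only a sketch: the `parse_bridge_if` name and the canned brctl output are hypothetical, not part of Virtuozzo tooling.

```shell
# Sketch: extract the bridged interface name from `brctl show` output.
# parse_bridge_if reads brctl output on stdin and prints the interface
# attached to br0 (the 4th column of the br0 row).
parse_bridge_if() {
    awk '$1 == "br0" { print $4 }'
}

# Example with canned output; on a real host you would run:
#   brctl show | parse_bridge_if
printf 'bridge name\tbridge id\t\tSTP enabled\tinterfaces\nbr0\t\t8000.000c29aabbcc\tno\t\tens33\n' | parse_bridge_if
```

The same awk pattern is what the dense one-liner's first command substitution computes.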
Then edit /etc/sysconfig/network-scripts/ifcfg-ens33 to have the following content, and remove the BRIDGE=&amp;quot;br0&amp;quot; line from it:&lt;br /&gt;
 ...&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 IPADDR=192.168.190.134&lt;br /&gt;
 GATEWAY=192.168.190.2&lt;br /&gt;
 PREFIX=24&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Remove the /etc/sysconfig/network-scripts/ifcfg-br0 file:&lt;br /&gt;
&lt;br /&gt;
 $ rm /etc/sysconfig/network-scripts/ifcfg-br0&lt;br /&gt;
 &lt;br /&gt;
Then restart the network service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart network&lt;br /&gt;
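The manual steps above (write the new ifcfg content, drop the bridge, restart networking) can be sketched as a small helper that generates the ifcfg fragment from the discovered values. This is a sketch only: the `gen_ifcfg` function is hypothetical, and the addresses are the sample values from the example output above.

```shell
# Sketch: generate the ifcfg fragment for the physical interface after
# the br0 bridge is removed. gen_ifcfg is a hypothetical helper; the
# example values below come from the sample output shown earlier.
gen_ifcfg() {
    ip_addr="$1"; gateway="$2"; prefix="$3"
    printf 'ONBOOT=yes\nIPADDR=%s\nGATEWAY=%s\nPREFIX=%s\n' \
        "$ip_addr" "$gateway" "$prefix"
}

# Print the fragment for the example values:
gen_ifcfg 192.168.190.134 192.168.190.2 24
```

The printed lines would be merged into /etc/sysconfig/network-scripts/ifcfg-ens33 before restarting the network service.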
&lt;br /&gt;
== Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*) == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack, and your Virtuozzo Storage is running on another node and not on this one, you need to set up the Virtuozzo Storage client and authorize the node in the Virtuozzo Storage cluster.&lt;br /&gt;
&lt;br /&gt;
Set up the Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery is working:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output will show the discovered clusters.&lt;br /&gt;
Now authenticate the controller node in the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.0.0/24 --floating_range 10.24.41.0/24 --floating_pool &amp;quot;start=10.24.41.151,end=10.24.41.199&amp;quot; --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL &lt;br /&gt;
&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Compute Node (*Developer/POC Setup*) ==&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone Virtuozzo scripts to your COMPUTE node:&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd /vz/virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack, and your Virtuozzo Storage is running on another node and not on the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
Set up the Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery is working:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output will show the discovered clusters.&lt;br /&gt;
Now authenticate the compute node in the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the COMPUTE node. Please read the script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1!  --use_provider_network true --mode COMPUTE --controller 10.24.41.25 &lt;br /&gt;
&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
== How to change Virtualization Type to Virtual Machines on the Compute Node ==&lt;br /&gt;
&lt;br /&gt;
If you want to use virtual machines instead of containers on your compute node, you need to change the virtualization type to KVM on the selected compute node.&lt;br /&gt;
&lt;br /&gt;
Open nova configuration file:&lt;br /&gt;
 $ vi /etc/nova/nova.conf&lt;br /&gt;
&lt;br /&gt;
Change the following lines:&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = qcow2&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
Delete the line:&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
Save the file.&lt;br /&gt;
&lt;br /&gt;
Restart the nova-compute service:&lt;br /&gt;
 $ su stack&lt;br /&gt;
 $ screen -r&lt;br /&gt;
Press Ctrl-c&lt;br /&gt;
 $ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' &amp;amp; echo $! &amp;gt;/vz/stack/status/stack/n-cpu.pid; fg || echo &amp;quot;n-cpu failed to start&amp;quot; | tee &amp;quot;/vz/stack/status/stack/n-cpu.failure&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To exit from screen session:&lt;br /&gt;
Press Ctrl+a+d&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run ./setup_devstack_vz7.sh with options you need.&lt;br /&gt;
&lt;br /&gt;
== Installing OpenStack with help of packstack on [[Virtuozzo]] 7 (*Production Setup*) == &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Install Virtuozzo Platform Release package to all Virtuozzo OpenStack nodes:&lt;br /&gt;
&lt;br /&gt;
 $ yum install vz-platform-release&lt;br /&gt;
&lt;br /&gt;
* Install packstack package:&lt;br /&gt;
&lt;br /&gt;
 $ yum install openstack-packstack&lt;br /&gt;
&lt;br /&gt;
* Download sample Vz7 packstack answer file:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://raw.githubusercontent.com/virtuozzo/virtuozzo-openstack-scripts/master/vz7-packstack-ocata.txt&lt;br /&gt;
&lt;br /&gt;
* Edit vz7-packstack-ocata.txt, enabling/disabling the necessary services&lt;br /&gt;
* Replace all references to 'localhost' and '127.0.0.1' host addresses with the correct values&lt;br /&gt;
* Set all password parameters containing the PW_PLACEHOLDER string to meaningful values&lt;br /&gt;
* If you are going to use Virtuozzo Storage as a Cinder Volume backend, set the following parameters:&lt;br /&gt;
&lt;br /&gt;
  # Enable Virtuozzo Storage&lt;br /&gt;
  CONFIG_VSTORAGE_ENABLED=y&lt;br /&gt;
&lt;br /&gt;
  # VStorage cluster name.&lt;br /&gt;
  CONFIG_VSTORAGE_CLUSTER_NAME=&lt;br /&gt;
&lt;br /&gt;
  # VStorage cluster password.&lt;br /&gt;
  CONFIG_VSTORAGE_CLUSTER_PASSWORD= &lt;br /&gt;
&lt;br /&gt;
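Before running packstack, it can help to verify that no placeholders or localhost references remain in the answer file. A minimal sketch, assuming the file was downloaded as vz7-packstack-ocata.txt above; the `check_answers` function is hypothetical, not part of packstack:

```shell
# Sketch: report answer-file lines that still need editing.
# Flags PW_PLACEHOLDER passwords and localhost/127.0.0.1 host addresses.
check_answers() {
    if grep -nE 'PW_PLACEHOLDER|localhost|127\.0\.0\.1' "$1"; then
        echo "answer file still contains placeholders" >&2
        return 1
    fi
    echo "answer file looks clean"
}

# Example usage:
#   check_answers vz7-packstack-ocata.txt
```

A non-zero exit status means at least one parameter still needs to be edited before packstack is run.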
* Then run packstack:&lt;br /&gt;
&lt;br /&gt;
 $ packstack --answer-file=vz7-packstack-ocata.txt&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.virtuozzo.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-exe --disk-format ploop --min-ram 512 --min-disk 1 --container-format bare --property vm_mode=exe --property hypervisor_type=vz --property cinder_img_volume_type=vstorage-ploop --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-hvm --disk-format qcow2 --min-ram 1024 --min-disk 10 --container-format bare --property cinder_img_volume_type=vstorage-qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2&lt;br /&gt;
&lt;br /&gt;
* The CentOS cloud image can be downloaded from [http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 cloud.centos.org]&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
Please use this chapter if you are going to run containers OR virtual machines on your compute node, but not containers AND virtual machines simultaneously. If you need to run containers and VMs simultaneously, please use the next chapter.&lt;br /&gt;
&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
 pointer_model = ps2mouse&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 vzstorage_mount_user = nova&lt;br /&gt;
 vzstorage_mount_group = root&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
* Remove the 'cpu_mode' parameter or set the following:&lt;br /&gt;
&lt;br /&gt;
 cpu_mode = none&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
* If you plan to run Virtual Machines on your compute node, change the 'images_type' parameter to 'qcow2'&lt;br /&gt;
&lt;br /&gt;
== Install and configure a block storage node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
If you are going to run containers AND virtual machines simultaneously on your compute node, you have to use this approach.&lt;br /&gt;
&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/mitaka/install-guide-rdo/cinder-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/cinder/cinder.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 enabled_backends = lvmdriver-1,vstorage-ploop,vstorage-qcow2&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
 [vstorage-ploop]&lt;br /&gt;
 vzstorage_default_volume_format = ploop&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
 [vstorage-qcow2]&lt;br /&gt;
 vzstorage_default_volume_format = qcow2&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
* Create /etc/cinder/vzstorage-shares-vstorage.conf with the following content:&lt;br /&gt;
&lt;br /&gt;
 YOUR-CLUSTER-NAME [&amp;quot;-u&amp;quot;, &amp;quot;cinder&amp;quot;, &amp;quot;-g&amp;quot;, &amp;quot;root&amp;quot;, &amp;quot;-m&amp;quot;, &amp;quot;0770&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
* Create two new volume types:&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-qcow2&lt;br /&gt;
 $ cinder type-key vstorage-qcow2 set volume_backend_name=vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-ploop&lt;br /&gt;
 $ cinder type-key vstorage-ploop set volume_backend_name=vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
* Create a directory for storage logs:&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /var/log/pstorage&lt;br /&gt;
&lt;br /&gt;
* Authenticate your Virtuozzo Storage client nodes in order to allow them to mount the cluster:&lt;br /&gt;
&lt;br /&gt;
 $ echo $CLUSTER_PASSWD | vstorage auth-node -c YOUR-CLUSTER-NAME -P&lt;br /&gt;
 &lt;br /&gt;
* Then restart cinder services:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-cinder-api&lt;br /&gt;
 $ systemctl restart openstack-cinder-scheduler&lt;br /&gt;
 $ systemctl restart openstack-cinder-volume&lt;br /&gt;
&lt;br /&gt;
== How to create a new ploop image ready to upload to Glance == &amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Select an OS template. The following templates are available: vzlinux-7, centos-7, ubuntu-16.04, ubuntu-14.04, debian-8.0, centos-6, debian-8.0-x86_64-minimal&lt;br /&gt;
&lt;br /&gt;
 $ ct=centos-7&lt;br /&gt;
&lt;br /&gt;
* Create a new container based on the chosen OS distribution&lt;br /&gt;
&lt;br /&gt;
 $ prlctl create glance-$ct --vmtype ct --ostemplate $ct&lt;br /&gt;
&lt;br /&gt;
* Set the IP address and DNS server so the container can connect to the Internet&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --ipadd IPADDR --nameserver DNS_IPADDR&lt;br /&gt;
&lt;br /&gt;
* Add an additional network adapter&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --device-add net --network Bridged --dhcp on&lt;br /&gt;
&lt;br /&gt;
* Start the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl start glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Install the cloud-init package&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct yum install cloud-init -y&lt;br /&gt;
&lt;br /&gt;
* Stop the container and mount it&lt;br /&gt;
&lt;br /&gt;
 $ prlctl stop glance-$ct&lt;br /&gt;
 $ prlctl mount glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Store the container UUID&lt;br /&gt;
&lt;br /&gt;
 $ uuid=$(vzlist glance-$ct | awk ' NR&amp;gt;1 { print $1 }') &lt;br /&gt;
&lt;br /&gt;
* Remove the following modules from cloud.cfg&lt;br /&gt;
&lt;br /&gt;
 $ sed -i '/- growpart/d' /vz/root/$uuid/etc/cloud/cloud.cfg&lt;br /&gt;
 $ sed -i '/- resizefs/d' /vz/root/$uuid/etc/cloud/cloud.cfg&lt;br /&gt;
&lt;br /&gt;
* Prepare network scripts&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth0 &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 DEVICE=eth0&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 NM_CONTROLLED=no&lt;br /&gt;
 BOOTPROTO=dhcp&lt;br /&gt;
 _EOF&lt;br /&gt;
&lt;br /&gt;
* If you need more than one network adapter within the container, make as many copies as you need&lt;br /&gt;
&lt;br /&gt;
 $ cp /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth0 /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
 $ sed -i 's/eth0/eth1/' /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
&lt;br /&gt;
* Perform some cleanup&lt;br /&gt;
&lt;br /&gt;
 $ rm -f /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-venet0*&lt;br /&gt;
 $ rm -f /vz/root/$uuid/etc/resolv.conf&lt;br /&gt;
&lt;br /&gt;
* Create ploop disk and copy files&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct&lt;br /&gt;
 $ ploop init -s 950M /tmp/ploop-$ct/$ct.hds&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct/dst&lt;br /&gt;
 $ ploop mount -m /tmp/ploop-$ct/dst /tmp/ploop-$ct/DiskDescriptor.xml&lt;br /&gt;
 $ cp -Pr --preserve=all /vz/root/$uuid/* /tmp/ploop-$ct/dst/&lt;br /&gt;
 $ ploop umount -m /tmp/ploop-$ct/dst/&lt;br /&gt;
&lt;br /&gt;
* Unmount the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl umount glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Now the image /tmp/ploop-$ct/$ct.hds is ready to be uploaded to Glance&lt;br /&gt;
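As a hedged sketch, the upload can reuse the ploop options from the glance image-create call shown in the controller section; the image name is illustrative, and an authenticated, reachable Glance endpoint is assumed, so the upload line is shown commented out.&lt;br /&gt;

```shell
# Compose the path of the ploop image built above.
ct=centos-7
img=/tmp/ploop-$ct/$ct.hds
echo "$img"

# Hypothetical upload, mirroring the controller-section example
# (requires Glance credentials, so it is left commented out):
# glance image-create --name $ct-ploop --disk-format ploop \
#     --container-format bare --min-ram 512 --min-disk 1 \
#     --property vm_mode=exe --property hypervisor_type=vz --file "$img"
```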
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html Controller Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html Compute Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/environment-packages.html OpenStack Installation Guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo Documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22609</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22609"/>
		<updated>2017-04-21T11:40:04Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: /* Installing OpenStack with help of packstack on Virtuozzo 7 (*Production Setup*) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6. Virtuozzo 7 adds many new capabilities to the OpenStack integration.&lt;br /&gt;
&lt;br /&gt;
This guide describes two ways of installing OpenStack on Virtuozzo nodes. The first is for quick development/POC needs; the second is for production. Keep in mind that devstack installs OpenStack for demo/POC/development purposes only, which means the setup will be reset after a host reboot.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to set up OpenStack with Virtuozzo 7:&lt;br /&gt;
#controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and Virtuozzo containers host.&lt;br /&gt;
#compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the virtual machines host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Then update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
If you have the br0 bridge configured as an IP interface, you should move the IP address assigned to it to the physical Ethernet interface bridged to br0.&lt;br /&gt;
You can check your configuration with the following command:&lt;br /&gt;
&lt;br /&gt;
 $ if=$(brctl show | grep '^br0' | awk ' { print $4 }') &amp;amp;&amp;amp; addr=$(ip addr | grep -w 'br0' | grep inet | awk ' {print $2} ') &amp;amp;&amp;amp; gw=$(ip route | grep default | awk ' { print $3 } ') &amp;amp;&amp;amp; echo &amp;quot;My interface is '$if', gateway is '$gw', IP address '$addr'&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For instance, the above script may produce the following output:&lt;br /&gt;
&lt;br /&gt;
 My interface is 'ens33', gateway is '192.168.190.2', IP address '192.168.190.134/24'.&lt;br /&gt;
&lt;br /&gt;
Then edit /etc/sysconfig/network-scripts/ifcfg-ens33 to have the following content, removing the BRIDGE=&amp;quot;br0&amp;quot; line from it:&lt;br /&gt;
 ...&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 IPADDR=192.168.190.134&lt;br /&gt;
 GATEWAY=192.168.190.2&lt;br /&gt;
 PREFIX=24&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Remove the /etc/sysconfig/network-scripts/ifcfg-br0 file:&lt;br /&gt;
&lt;br /&gt;
 $ rm /etc/sysconfig/network-scripts/ifcfg-br0&lt;br /&gt;
 &lt;br /&gt;
Then restart network service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart network&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*) == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and it is running on a node other than the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
Check cluster discovery is working fine first: &lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
Output will show the discovered clusters.&lt;br /&gt;
Now you need to authenticate controller node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the virtuozzo storage cluster password and press Enter. &lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
Output will show Virtuozzo storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.0.0/24 --floating_range 10.24.41.0/24 --floating_pool &amp;quot;start=10.24.41.151,end=10.24.41.199&amp;quot; --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL &lt;br /&gt;
&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Compute Node (*Developer/POC Setup*) ==&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone Virtuozzo scripts to your COMPUTE node:&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd /vz/virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and it is running on a node other than the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
Check cluster discovery is working fine first: &lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
Output will show the discovered clusters.&lt;br /&gt;
Now you need to authenticate the compute node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the virtuozzo storage cluster password and press Enter. &lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
Output will show the virtuozzo storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the COMPUTE node. Please read the script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1!  --use_provider_network true --mode COMPUTE --controller 10.24.41.25 &lt;br /&gt;
&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
== How to change Virtualization Type to Virtual Machines on the Compute Node ==&lt;br /&gt;
&lt;br /&gt;
If you want to use virtual machines instead of containers on your compute node, you need to change the virtualization type to KVM on the selected compute node.&lt;br /&gt;
&lt;br /&gt;
Open nova configuration file:&lt;br /&gt;
 $ vi /etc/nova/nova.conf&lt;br /&gt;
&lt;br /&gt;
Change the following lines:&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = qcow2&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
Delete the line:&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
Save the file.&lt;br /&gt;
&lt;br /&gt;
Restart nova-compute service:&lt;br /&gt;
 $ su stack&lt;br /&gt;
 $ screen -r&lt;br /&gt;
Press Ctrl-c&lt;br /&gt;
 $ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' &amp;amp; echo $! &amp;gt;/vz/stack/status/stack/n-cpu.pid; fg || echo &amp;quot;n-cpu failed to start&amp;quot; | tee &amp;quot;/vz/stack/status/stack/n-cpu.failure&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To detach from the screen session:&lt;br /&gt;
Press Ctrl+a, then d&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after node restart. To redeploy OpenStack on the same nodes do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run ./setup_devstack_vz7.sh with options you need.&lt;br /&gt;
&lt;br /&gt;
== Installing OpenStack with the help of packstack on [[Virtuozzo]] 7 (*Production Setup*) == &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Install Virtuozzo Platform Release package:&lt;br /&gt;
&lt;br /&gt;
 $ yum install vz-platform-release&lt;br /&gt;
&lt;br /&gt;
* Install packstack package:&lt;br /&gt;
&lt;br /&gt;
 $ yum install openstack-packstack&lt;br /&gt;
&lt;br /&gt;
* Download sample Vz7 packstack answer file:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://raw.githubusercontent.com/virtuozzo/virtuozzo-openstack-scripts/master/vz7-packstack-ocata.txt&lt;br /&gt;
&lt;br /&gt;
* Edit vz7-packstack-ocata.txt, enabling/disabling the necessary services&lt;br /&gt;
* Replace all references to the 'localhost' and '127.0.0.1' host addresses with the correct values&lt;br /&gt;
* Set all password parameters containing the PW_PLACEHOLDER string to meaningful values&lt;br /&gt;
* If you are going to use Virtuozzo Storage as a Cinder Volume backend set the following parameters:&lt;br /&gt;
&lt;br /&gt;
  # Enable Virtuozzo Storage&lt;br /&gt;
  CONFIG_VSTORAGE_ENABLED=y&lt;br /&gt;
&lt;br /&gt;
  # VStorage cluster name.&lt;br /&gt;
  CONFIG_VSTORAGE_CLUSTER_NAME=&lt;br /&gt;
&lt;br /&gt;
  # VStorage cluster password.&lt;br /&gt;
  CONFIG_VSTORAGE_CLUSTER_PASSWORD= &lt;br /&gt;
&lt;br /&gt;
* Then run packstack:&lt;br /&gt;
&lt;br /&gt;
 $ packstack --answer-file=vz7-packstack-ocata.txt&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.virtuozzo.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to Glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-exe --disk-format ploop --min-ram 512 --min-disk 1 --container-format bare --property vm_mode=exe --property hypervisor_type=vz --property cinder_img_volume_type=vstorage-ploop --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-hvm --disk-format qcow2 --min-ram 1024 --min-disk 10 --container-format bare --property cinder_img_volume_type=vstorage-qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2&lt;br /&gt;
&lt;br /&gt;
* The CentOS cloud image can be downloaded [http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 here]&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
Please use this chapter if you are going to run containers OR virtual machines on your compute node, but not containers AND virtual machines simultaneously. If you need to run containers and VMs simultaneously, please use the next chapter.&lt;br /&gt;
&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
 pointer_model = ps2mouse&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 vzstorage_mount_user = nova&lt;br /&gt;
 vzstorage_mount_group = root&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
* Remove the 'cpu_mode' parameter or set it as follows:&lt;br /&gt;
&lt;br /&gt;
 cpu_mode = none&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
* If you plan to run virtual machines on your compute node, change the 'images_type' parameter to 'qcow2'&lt;br /&gt;
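As a sketch, the switch can also be scripted. The snippet below demonstrates the substitution on a temporary sample of the [libvirt] section only; on a real compute node the target would be /etc/nova/nova.conf, and nova-compute must be restarted afterwards as shown above.&lt;br /&gt;

```shell
# Demonstrate flipping images_type from ploop (containers) to qcow2 (VMs)
# on a sample [libvirt] fragment; adapt the path for a real /etc/nova/nova.conf.
conf=$(mktemp)
printf '%s\n' '[libvirt]' 'virt_type = parallels' 'images_type = ploop' 'connection_uri = vz:///system' | tee "$conf"

# Replace the backing image format in place.
sed -i 's/^images_type = ploop$/images_type = qcow2/' "$conf"

grep '^images_type' "$conf"
rm -f "$conf"
```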
&lt;br /&gt;
== Install and configure a block storage node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
If you are going to run containers AND virtual machines simultaneously on your compute node, you have to use this approach.&lt;br /&gt;
&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/mitaka/install-guide-rdo/cinder-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/cinder/cinder.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 enabled_backends = lvmdriver-1,vstorage-ploop,vstorage-qcow2&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
 [vstorage-ploop]&lt;br /&gt;
 vzstorage_default_volume_format = ploop&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
 [vstorage-qcow2]&lt;br /&gt;
 vzstorage_default_volume_format = qcow2&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
* Create /etc/cinder/vzstorage-shares-vstorage.conf with the following content:&lt;br /&gt;
&lt;br /&gt;
 YOUR-CLUSTER-NAME [&amp;quot;-u&amp;quot;, &amp;quot;cinder&amp;quot;, &amp;quot;-g&amp;quot;, &amp;quot;root&amp;quot;, &amp;quot;-m&amp;quot;, &amp;quot;0770&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
* Create two new volume types:&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-qcow2&lt;br /&gt;
 $ cinder type-key vstorage-qcow2 set volume_backend_name=vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-ploop&lt;br /&gt;
 $ cinder type-key vstorage-ploop set volume_backend_name=vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
* Create a directory for storage logs:&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /var/log/pstorage&lt;br /&gt;
&lt;br /&gt;
* Authenticate your Virtuozzo Storage client nodes in order to allow them to mount the cluster:&lt;br /&gt;
&lt;br /&gt;
 $ echo $CLUSTER_PASSWD | vstorage auth-node -c YOUR-CLUSTER-NAME -P&lt;br /&gt;
 &lt;br /&gt;
* Then restart cinder services:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-cinder-api&lt;br /&gt;
 $ systemctl restart openstack-cinder-scheduler&lt;br /&gt;
 $ systemctl restart openstack-cinder-volume&lt;br /&gt;
&lt;br /&gt;
== How to create a new ploop image ready to upload to Glance == &amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Select an OS template. The following templates are available: vzlinux-7, centos-7, ubuntu-16.04, ubuntu-14.04, debian-8.0, centos-6, debian-8.0-x86_64-minimal&lt;br /&gt;
&lt;br /&gt;
 $ ct=centos-7&lt;br /&gt;
&lt;br /&gt;
* Create a new container based on the chosen OS distribution&lt;br /&gt;
&lt;br /&gt;
 $ prlctl create glance-$ct --vmtype ct --ostemplate $ct&lt;br /&gt;
&lt;br /&gt;
* Set the IP address and DNS server so the container can connect to the Internet&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --ipadd IPADDR --nameserver DNS_IPADDR&lt;br /&gt;
&lt;br /&gt;
* Add an additional network adapter&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --device-add net --network Bridged --dhcp on&lt;br /&gt;
&lt;br /&gt;
* Start the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl start glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Install the cloud-init package&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct yum install cloud-init -y&lt;br /&gt;
&lt;br /&gt;
* Stop the container and mount it&lt;br /&gt;
&lt;br /&gt;
 $ prlctl stop glance-$ct&lt;br /&gt;
 $ prlctl mount glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Store the container UUID&lt;br /&gt;
&lt;br /&gt;
 $ uuid=$(vzlist glance-$ct | awk ' NR&amp;gt;1 { print $1 }') &lt;br /&gt;
&lt;br /&gt;
* Remove the following modules from cloud.cfg&lt;br /&gt;
&lt;br /&gt;
 $ sed -i '/- growpart/d' /vz/root/$uuid/etc/cloud/cloud.cfg&lt;br /&gt;
 $ sed -i '/- resizefs/d' /vz/root/$uuid/etc/cloud/cloud.cfg&lt;br /&gt;
&lt;br /&gt;
* Prepare network scripts&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth0 &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 DEVICE=eth0&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 NM_CONTROLLED=no&lt;br /&gt;
 BOOTPROTO=dhcp&lt;br /&gt;
 _EOF&lt;br /&gt;
&lt;br /&gt;
* If you need more than one network adapter within the container, make as many copies as you need&lt;br /&gt;
&lt;br /&gt;
 $ cp /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth0 /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
 $ sed -i 's/eth0/eth1/' /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
&lt;br /&gt;
* Perform some cleanup&lt;br /&gt;
&lt;br /&gt;
 $ rm -f /vz/root/$uuid/etc/sysconfig/network-scripts/ifcfg-venet0*&lt;br /&gt;
 $ rm -f /vz/root/$uuid/etc/resolv.conf&lt;br /&gt;
&lt;br /&gt;
* Create ploop disk and copy files&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct&lt;br /&gt;
 $ ploop init -s 950M /tmp/ploop-$ct/$ct.hds&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct/dst&lt;br /&gt;
 $ ploop mount -m /tmp/ploop-$ct/dst /tmp/ploop-$ct/DiskDescriptor.xml&lt;br /&gt;
 $ cp -Pr --preserve=all /vz/root/$uuid/* /tmp/ploop-$ct/dst/&lt;br /&gt;
 $ ploop umount -m /tmp/ploop-$ct/dst/&lt;br /&gt;
&lt;br /&gt;
* Unmount the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl umount glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Now the image /tmp/ploop-$ct/$ct.hds is ready to be uploaded to Glance&lt;br /&gt;
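As a hedged sketch, the upload can reuse the ploop options from the glance image-create call shown in the controller section; the image name is illustrative, and an authenticated, reachable Glance endpoint is assumed, so the upload line is shown commented out.&lt;br /&gt;

```shell
# Compose the path of the ploop image built above.
ct=centos-7
img=/tmp/ploop-$ct/$ct.hds
echo "$img"

# Hypothetical upload, mirroring the controller-section example
# (requires Glance credentials, so it is left commented out):
# glance image-create --name $ct-ploop --disk-format ploop \
#     --container-format bare --min-ram 512 --min-disk 1 \
#     --property vm_mode=exe --property hypervisor_type=vz --file "$img"
```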
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html Controller Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html Compute Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/environment-packages.html OpenStack Installation Guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo Documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22571</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22571"/>
		<updated>2017-03-18T09:22:15Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is the Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remains at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions are taken by authors from public sources only. This information can be changed by any OpenVZ Wiki user without any notice and author's review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of underneath hardware level: full isolation guest environment, no dependencies from host OS, overhead for hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same instance of host OS: high density, high performance, high dependencies from host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables to run Virtual Machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables to run Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (moved from OpenVZ since 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management and different policies&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change amount of RAM for CT and VM without reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated GUI'''&lt;br /&gt;
|Centralized multi-server management&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to backup virtual environments (over and above the ability to perform classic backup using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability in order to convert physical systems to virtual environment.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not zero-downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|Qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}}, depends on underlying storage driver&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability  e.g. providing a virtual SAN through virtualized 'local' storage &lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network traffic of CTs/VMs&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of features are relevant only for Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} Application Image Catalog [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} Application Image Catalog [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel update without reboot'''&lt;br /&gt;
|Integrated ability to upgrade kernel or install security patches without downtime.&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|{{Yes}}, Rebootless Kernel Update&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool used for managing particular virtual machines and containers by their end users.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for use in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an open-source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an open-source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22570</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22570"/>
		<updated>2017-03-18T09:20:43Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: /* Feature comparison of different virtualization solutions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is the Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remains at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. This information can be changed by any OpenVZ Wiki user without notice, author review, or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependencies on the host OS, some overhead for the hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same host OS instance: high density, high performance, strong dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running virtual machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (switched from OpenVZ in version 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management and different policies&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change amount of RAM for CT and VM without reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated GUI'''&lt;br /&gt;
|Centralized multi-server management&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to backup virtual environments (over and above the ability to perform classic backup using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability in order to convert physical systems to virtual environment.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not zero-downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, which automatically live migrates all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically migrate VEs onto fewer hosts, power off unused capacity (hosts), and wake systems back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|Qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}}, depends on underlying storage driver&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability  e.g. providing a virtual SAN through virtualized 'local' storage &lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network traffic of CTs/VMs&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of features are relevant only for Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} Application Image Catalog [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} Application Image Catalog [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel update without reboot'''&lt;br /&gt;
|Integrated ability to upgrade kernel or install security patches without downtime.&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|{{Yes}}, Rebootless Kernel Update&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool used for managing particular virtual machines and containers by their end users.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for use in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an open-source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an open-source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22554</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22554"/>
		<updated>2017-03-01T15:01:15Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6, and Virtuozzo 7 adds many new capabilities to the OpenStack integration. &lt;br /&gt;
&lt;br /&gt;
This guide describes two ways of installing OpenStack on Virtuozzo nodes: the first is for quick development/POC needs, the second is for production. Keep in mind that DevStack installs OpenStack for demo/POC/development purposes only, which means the setup will be reset after a host reboot.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to set up OpenStack with Virtuozzo 7:&lt;br /&gt;
#Controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and Virtuozzo containers host.&lt;br /&gt;
#Compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the virtual machines host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
If you have the br0 bridge configured as an IP interface, you should move the IP address assigned to it to the physical Ethernet interface bridged to br0.&lt;br /&gt;
You can check your configuration with the following command:&lt;br /&gt;
&lt;br /&gt;
 $ if=$(brctl show | grep '^br0' | awk ' { print $4 }') &amp;amp;&amp;amp; addr=$(ip addr | grep -w 'br0' | grep inet | awk ' {print $2} ') &amp;amp;&amp;amp; gw=$(ip route | grep default | awk ' { print $3 } ') &amp;amp;&amp;amp; echo &amp;quot;My interface is '$if', gateway is '$gw', IP address '$addr'&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For instance, the above command may produce the following output:&lt;br /&gt;
&lt;br /&gt;
 My interface is 'ens33', gateway is '192.168.190.2', IP address '192.168.190.134/24'.&lt;br /&gt;
&lt;br /&gt;
Then edit your /etc/sysconfig/network-scripts/ifcfg-ens33 to have the following content, and remove the BRIDGE=&amp;quot;br0&amp;quot; line from it:&lt;br /&gt;
 ...&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 IPADDR=192.168.190.134&lt;br /&gt;
 GATEWAY=192.168.190.2&lt;br /&gt;
 PREFIX=24&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Remove the /etc/sysconfig/network-scripts/ifcfg-br0 file:&lt;br /&gt;
&lt;br /&gt;
 $ rm /etc/sysconfig/network-scripts/ifcfg-br0&lt;br /&gt;
 &lt;br /&gt;
Then restart the network service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart network&lt;br /&gt;
&lt;br /&gt;
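The steps above (drop the BRIDGE line from ifcfg-ens33, add the IP settings, remove ifcfg-br0, restart networking) can be sketched as one sequence. This is only a sketch that assumes the interface is ens33 and uses the sample addresses from the output above; review the values before running it on a real node.

```shell
# Sketch only: move the IP configuration from br0 to ens33,
# using the sample addresses from the check above.
$ sed -i '/^BRIDGE=/d' /etc/sysconfig/network-scripts/ifcfg-ens33
$ cat >> /etc/sysconfig/network-scripts/ifcfg-ens33 << _EOF
IPADDR=192.168.190.134
GATEWAY=192.168.190.2
PREFIX=24
_EOF
$ rm -f /etc/sysconfig/network-scripts/ifcfg-br0
$ systemctl restart network
```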
== Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*) == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up an OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone the Virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack, and your Virtuozzo Storage is running on another node rather than on the controller, you need to set up the Virtuozzo Storage client and authorize the controller node in the Virtuozzo Storage cluster. &lt;br /&gt;
&lt;br /&gt;
Set up the Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery works: &lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output will show the discovered clusters.&lt;br /&gt;
Now authenticate the controller node in the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter. &lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.0.0/24 --floating_range 10.24.41.0/24 --floating_pool &amp;quot;start=10.24.41.151,end=10.24.41.199&amp;quot; --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL &lt;br /&gt;
&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Compute Node (*Developer/POC Setup*) ==&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone Virtuozzo scripts to your COMPUTE node:&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd /vz/virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack, and your Virtuozzo Storage is running on another node rather than on the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster. &lt;br /&gt;
Set up the Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery works: &lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output will show the discovered clusters.&lt;br /&gt;
Now authenticate the compute node in the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter. &lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the COMPUTE node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1!  --use_provider_network true --mode COMPUTE --controller 10.24.41.25 &lt;br /&gt;
&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
== How to change Virtualization Type to Virtual Machines on the Compute Node ==&lt;br /&gt;
&lt;br /&gt;
If you want to use virtual machines instead of containers on your compute node, you need to change the virtualization type to KVM on the selected compute node.&lt;br /&gt;
&lt;br /&gt;
Open the nova configuration file:&lt;br /&gt;
 $ vi /etc/nova/nova.conf&lt;br /&gt;
&lt;br /&gt;
Change the following lines:&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = qcow2&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
Delete the line:&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
Save the file.&lt;br /&gt;
&lt;br /&gt;
Restart nova-compute service:&lt;br /&gt;
 $ su stack&lt;br /&gt;
 $ screen -r&lt;br /&gt;
Press Ctrl-c&lt;br /&gt;
 $ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' &amp;amp; echo $! &amp;gt;/vz/stack/status/stack/n-cpu.pid; fg || echo &amp;quot;n-cpu failed to start&amp;quot; | tee &amp;quot;/vz/stack/status/stack/n-cpu.failure&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To exit from the screen session, press Ctrl+a, then d.&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run ./setup_devstack_vz7.sh with the options you need.&lt;br /&gt;
&lt;br /&gt;
== Installing OpenStack with the help of packstack on [[Virtuozzo]] 7 (*Production Setup*) == &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Create a new repo file:&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /etc/yum.repos.d/virtuozzo-extra.repo &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 [virtuozzo-extra]&lt;br /&gt;
 name=Virtuozzo Extra&lt;br /&gt;
 baseurl=http://repo.virtuozzo.com/openstack/newton/x86_64/os/&lt;br /&gt;
 enabled=1&lt;br /&gt;
 gpgcheck=1&lt;br /&gt;
 priority=50&lt;br /&gt;
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-7&lt;br /&gt;
 _EOF&lt;br /&gt;
&lt;br /&gt;
* Add RDO repository:&lt;br /&gt;
 &lt;br /&gt;
 $ yum install https://rdoproject.org/repos/rdo-release.rpm&lt;br /&gt;
&lt;br /&gt;
* Install packstack package:&lt;br /&gt;
&lt;br /&gt;
 $ yum install openstack-packstack&lt;br /&gt;
&lt;br /&gt;
* Download sample Vz7 packstack answer file:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://raw.githubusercontent.com/virtuozzo/virtuozzo-openstack-scripts/master/vz7-packstack-sample.txt&lt;br /&gt;
&lt;br /&gt;
* Edit vz7-packstack-sample.txt, enabling or disabling the necessary services. Also make sure that correct IP addresses are specified in the following parameters in the file:&lt;br /&gt;
&lt;br /&gt;
 CONFIG_CONTROLLER_HOST&lt;br /&gt;
 CONFIG_COMPUTE_HOSTS&lt;br /&gt;
 CONFIG_NETWORK_HOSTS&lt;br /&gt;
 CONFIG_AMQP_HOST&lt;br /&gt;
 CONFIG_MARIADB_HOST&lt;br /&gt;
 CONFIG_REDIS_HOST&lt;br /&gt;
&lt;br /&gt;
* Make sure to change the CONFIG_DEFAULT_PASSWORD parameter!&lt;br /&gt;
* Then run packstack:&lt;br /&gt;
&lt;br /&gt;
 $ packstack --answer-file vz7-packstack-sample.txt&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* Change the disk_formats line in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.virtuozzo.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --property hypervisor_type=vz --property cinder_img_volume_type=vstorage-ploop --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-hvm --disk-format qcow2 --container-format bare --property cinder_img_volume_type=vstorage-qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2&lt;br /&gt;
&lt;br /&gt;
* The CentOS image can be downloaded from [http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 here]&lt;br /&gt;
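Once both uploads finish, you can confirm that Glance registered the images correctly (this assumes the glance CLI is already configured with your OpenStack credentials):

```shell
# Both images should be listed with status "active".
$ glance image-list

# Inspect the properties (vm_mode, hypervisor_type) of the container image.
$ glance image-show centos7-exe
```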
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
Use this chapter if you are going to run either containers OR virtual machines on your compute node, but not containers AND virtual machines simultaneously. If you need to run containers and VMs simultaneously, use the next chapter instead.&lt;br /&gt;
&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
 pointer_model = ps2mouse&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 vzstorage_mount_group = root&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
* Remove the 'cpu_mode' parameter or set the following:&lt;br /&gt;
&lt;br /&gt;
 cpu_mode=none&lt;br /&gt;
  &lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
* If you plan to run virtual machines on your compute node, change the 'images_type' parameter to 'qcow2'.&lt;br /&gt;
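For a VM-only compute node, the resulting [libvirt] section would therefore look as follows (the same settings as above, with only images_type changed; shown here as a reference fragment):

```ini
[libvirt]
vzstorage_mount_group = root
virt_type = parallels
images_type = qcow2
connection_uri = vz:///system
```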
&lt;br /&gt;
== Install and configure a block storage node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
If you are going to run containers AND virtual machines simultaneously on your compute node you have to use this approach.&lt;br /&gt;
&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/mitaka/install-guide-rdo/cinder-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/cinder/cinder.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 enabled_backends = lvmdriver-1,vstorage-ploop,vstorage-qcow2&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
 [vstorage-ploop]&lt;br /&gt;
 vzstorage_default_volume_format = ploop&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
 [vstorage-qcow2]&lt;br /&gt;
 vzstorage_default_volume_format = qcow2&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
* Create /etc/cinder/vzstorage-shares-vstorage.conf with the following content:&lt;br /&gt;
&lt;br /&gt;
 YOUR-CLUSTER-NAME [&amp;quot;-u&amp;quot;, &amp;quot;cinder&amp;quot;, &amp;quot;-g&amp;quot;, &amp;quot;root&amp;quot;, &amp;quot;-m&amp;quot;, &amp;quot;0770&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
* Create two new volume types:&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-qcow2&lt;br /&gt;
 $ cinder type-key vstorage-qcow2 set volume_backend_name=vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-ploop&lt;br /&gt;
 $ cinder type-key vstorage-ploop set volume_backend_name=vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
* Create directory for storage logs:&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /var/log/pstorage&lt;br /&gt;
&lt;br /&gt;
* Authenticate your Virtuozzo Storage client nodes in order to allow them to mount the cluster:&lt;br /&gt;
&lt;br /&gt;
 $ echo $CLUSTER_PASSWD | vstorage auth-node -c cc -P&lt;br /&gt;
 &lt;br /&gt;
* Then restart cinder services:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-cinder-api&lt;br /&gt;
 $ systemctl restart openstack-cinder-scheduler&lt;br /&gt;
 $ systemctl restart openstack-cinder-volume&lt;br /&gt;
&lt;br /&gt;
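After the services restart, you can verify that both new backends are usable by listing the volume types and creating a small test volume (the volume name test-ploop below is only an example):

```shell
# The two types created above should be listed.
$ cinder type-list

# Create a 1 GB test volume on the ploop backend and check its status.
$ cinder create --name test-ploop --volume-type vstorage-ploop 1
$ cinder list
```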
== How to create a new ploop image ready to upload to Glance == &amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Select an OS template. The following templates are available: vzlinux-7, centos-7, ubuntu-16.04, ubuntu-14.04, debian-8.0, centos-6, debian-8.0-x86_64-minimal&lt;br /&gt;
&lt;br /&gt;
 $ ct=centos-7&lt;br /&gt;
&lt;br /&gt;
* Create a new container based on the selected OS distribution&lt;br /&gt;
&lt;br /&gt;
 $ prlctl create glance-$ct --vmtype ct --ostemplate $ct&lt;br /&gt;
&lt;br /&gt;
* Set an IP address and DNS server to be able to connect to the Internet from the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --ipadd IPADDR --nameserver DNS_IPADDR&lt;br /&gt;
&lt;br /&gt;
* Add an additional network adapter&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --device-add net --network Bridged --dhcp on&lt;br /&gt;
&lt;br /&gt;
* Start the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl start glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Install the cloud-init package&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct yum install cloud-init -y&lt;br /&gt;
&lt;br /&gt;
* Remove the following modules from cloud.cfg&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i '/- growpart/d' /etc/cloud/cloud.cfg&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i '/- resizefs/d' /etc/cloud/cloud.cfg&lt;br /&gt;
&lt;br /&gt;
* Prepare network scripts&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /etc/sysconfig/network-scripts/ifcfg-eth0 &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 DEVICE=eth0&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 NM_CONTROLLED=no&lt;br /&gt;
 BOOTPROTO=dhcp&lt;br /&gt;
 _EOF&lt;br /&gt;
&lt;br /&gt;
* If you need more than one network adapter within a container, make as many copies as you need &lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i 's/eth0/eth1/' /etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
&lt;br /&gt;
* Perform some cleanup&lt;br /&gt;
&lt;br /&gt;
 $ rm -f /etc/sysconfig/network-scripts/ifcfg-venet0*&lt;br /&gt;
 $ rm -f /etc/resolv.conf&lt;br /&gt;
&lt;br /&gt;
* Stop the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl stop glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Create ploop disk and copy files&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct&lt;br /&gt;
 $ ploop init -s 950M /tmp/ploop-$ct/$ct.hds&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct/dst&lt;br /&gt;
 $ ploop mount -m /tmp/ploop-$ct/dst /tmp/ploop-$ct/DiskDescriptor.xml&lt;br /&gt;
 $ prlctl mount glance-$ct&lt;br /&gt;
 $ id=$(vzlist glance-$ct | awk ' NR&amp;gt;1 { print $1 }')&lt;br /&gt;
 $ cp -Pr --preserve=all /vz/root/$id/* /tmp/ploop-$ct/dst/&lt;br /&gt;
 $ prlctl umount glance-$ct&lt;br /&gt;
 $ ploop umount -m /tmp/ploop-$ct/dst/&lt;br /&gt;
&lt;br /&gt;
* Now the image /tmp/ploop-$ct/$ct.hds is ready to be uploaded to Glance&lt;br /&gt;
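The upload itself can reuse the glance command shown in the controller section of this article (a sketch; the image name $ct-custom is a hypothetical example, so adjust the name and properties to your template):

```shell
# Upload the freshly built ploop image; the --property values mirror
# the centos7-exe example earlier in this article.
$ glance image-create --name $ct-custom --disk-format ploop \
      --container-format bare --property vm_mode=exe \
      --property hypervisor_type=vz --file /tmp/ploop-$ct/$ct.hds
```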
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html Controller Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html Compute Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/environment-packages.html OpenStack Installation Guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo Documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22553</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22553"/>
		<updated>2017-03-01T14:59:27Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6, and Virtuozzo 7 adds many new capabilities to the OpenStack integration. &lt;br /&gt;
&lt;br /&gt;
This guide describes two ways of installing OpenStack on Virtuozzo nodes: the first is for quick development/POC needs, the second is for production. Keep in mind that DevStack installs OpenStack for demo/POC/development purposes only, which means the setup will be reset after a host reboot.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to set up OpenStack with Virtuozzo 7:&lt;br /&gt;
#Controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and Virtuozzo containers host.&lt;br /&gt;
#Compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the virtual machines host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
If you have the br0 bridge configured as an IP interface, you should move the IP address assigned to it to the physical Ethernet interface bridged to br0.&lt;br /&gt;
You can check your configuration with the following command:&lt;br /&gt;
&lt;br /&gt;
 $ if=$(brctl show | grep '^br0' | awk ' { print $4 }') &amp;amp;&amp;amp; addr=$(ip addr | grep -w 'br0' | grep inet | awk ' {print $2} ') &amp;amp;&amp;amp; gw=$(ip route | grep default | awk ' { print $3 } ') &amp;amp;&amp;amp; echo &amp;quot;My interface is '$if', gateway is '$gw', IP address '$addr'&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For instance, the above command may produce the following output:&lt;br /&gt;
&lt;br /&gt;
 My interface is 'ens33', gateway is '192.168.190.2', IP address '192.168.190.134/24'.&lt;br /&gt;
&lt;br /&gt;
Then edit your /etc/sysconfig/network-scripts/ifcfg-ens33 to have the following content, and remove the BRIDGE=&amp;quot;br0&amp;quot; line from it:&lt;br /&gt;
 ...&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 IPADDR=192.168.190.134&lt;br /&gt;
 GATEWAY=192.168.190.2&lt;br /&gt;
 PREFIX=24&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Remove /etc/sysconfig/network-scripts/ifcfg-br0 file.&lt;br /&gt;
&lt;br /&gt;
 $ rm /etc/sysconfig/network-scripts/ifcfg-br0&lt;br /&gt;
 &lt;br /&gt;
Then restart network service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart network&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*) == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up an OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack, and your Virtuozzo Storage is running on another node rather than on the controller, you need to set up the Virtuozzo Storage client and authorize the controller node in the Virtuozzo Storage cluster. &lt;br /&gt;
&lt;br /&gt;
Set up the Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery works: &lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output will show the discovered clusters.&lt;br /&gt;
Now authenticate the controller node in the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter. &lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.0.0/24 --floating_range 10.24.41.0/24 --floating_pool &amp;quot;start=10.24.41.151,end=10.24.41.199&amp;quot; --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL &lt;br /&gt;
&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Compute Node (*Developer/POC Setup*) ==&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone Virtuozzo scripts to your COMPUTE node:&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd /vz/virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack, and your Virtuozzo Storage is running on another node rather than on the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster. &lt;br /&gt;
Set up the Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery works: &lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output will show the discovered clusters.&lt;br /&gt;
Now authenticate the compute node in the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter. &lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the COMPUTE node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1!  --use_provider_network true --mode COMPUTE --controller 10.24.41.25 &lt;br /&gt;
&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
== How to change Virtualization Type to Virtual Machines on the Compute Node ==&lt;br /&gt;
&lt;br /&gt;
If you want to use virtual machines instead of containers on your compute node, you need to change the virtualization type to KVM on the selected compute node.&lt;br /&gt;
&lt;br /&gt;
Open the nova configuration file:&lt;br /&gt;
 $ vi /etc/nova/nova.conf&lt;br /&gt;
&lt;br /&gt;
Change the following lines:&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = qcow2&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
Delete the line:&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
Save the file.&lt;br /&gt;
&lt;br /&gt;
Restart nova-compute service:&lt;br /&gt;
 $ su stack&lt;br /&gt;
 $ screen -r&lt;br /&gt;
Press Ctrl-c&lt;br /&gt;
 $ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' &amp;amp; echo $! &amp;gt;/vz/stack/status/stack/n-cpu.pid; fg || echo &amp;quot;n-cpu failed to start&amp;quot; | tee &amp;quot;/vz/stack/status/stack/n-cpu.failure&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To exit from the screen session, press Ctrl+a, then d.&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run ./setup_devstack_vz7.sh with the options you need.&lt;br /&gt;
&lt;br /&gt;
== Installing OpenStack with the help of packstack on [[Virtuozzo]] 7 (*Production Setup*) == &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Create a new repo file:&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /etc/yum.repos.d/virtuozzo-extra.repo &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 [virtuozzo-extra]&lt;br /&gt;
 name=Virtuozzo Extra&lt;br /&gt;
 baseurl=http://repo.virtuozzo.com/openstack/newton/x86_64/os/&lt;br /&gt;
 enabled=1&lt;br /&gt;
 gpgcheck=1&lt;br /&gt;
 priority=50&lt;br /&gt;
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-7&lt;br /&gt;
 _EOF&lt;br /&gt;
&lt;br /&gt;
* Add RDO repository:&lt;br /&gt;
 &lt;br /&gt;
 $ yum install https://rdoproject.org/repos/rdo-release.rpm&lt;br /&gt;
&lt;br /&gt;
* Install packstack package:&lt;br /&gt;
&lt;br /&gt;
 $ yum install openstack-packstack&lt;br /&gt;
&lt;br /&gt;
* Download sample Vz7 packstack answer file:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://raw.githubusercontent.com/virtuozzo/virtuozzo-openstack-scripts/master/vz7-packstack-sample.txt&lt;br /&gt;
&lt;br /&gt;
* Edit vz7-packstack-sample.txt, enabling or disabling the necessary services. Also make sure that correct IP addresses are specified in the following parameters in the file:&lt;br /&gt;
&lt;br /&gt;
 CONFIG_CONTROLLER_HOST&lt;br /&gt;
 CONFIG_COMPUTE_HOSTS&lt;br /&gt;
 CONFIG_NETWORK_HOSTS&lt;br /&gt;
 CONFIG_AMQP_HOST&lt;br /&gt;
 CONFIG_MARIADB_HOST&lt;br /&gt;
 CONFIG_REDIS_HOST&lt;br /&gt;
&lt;br /&gt;
* Make sure to change the CONFIG_DEFAULT_PASSWORD parameter!&lt;br /&gt;
* Then run packstack:&lt;br /&gt;
&lt;br /&gt;
 $ packstack --answer-file vz7-packstack-sample.txt&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* Change the disk_formats line in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.virtuozzo.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --property hypervisor_type=vz --property cinder_img_volume_type=vstorage-ploop --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-hvm --disk-format qcow2 --container-format bare --property cinder_img_volume_type=vstorage-qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2&lt;br /&gt;
&lt;br /&gt;
* The CentOS cloud image can be downloaded from [http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 here]&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
Please use this chapter if you are going to run either containers or virtual machines on your compute node, but not both simultaneously. If you need to run containers and VMs simultaneously, please use the next chapter.&lt;br /&gt;
&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
 pointer_model = ps2mouse&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 vzstorage_mount_group = root&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
* Remove the 'cpu_mode' parameter or set it as follows:&lt;br /&gt;
&lt;br /&gt;
 cpu_mode=none&lt;br /&gt;
  &lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
* If you plan to run virtual machines on your compute node, change the 'images_type' parameter to 'qcow2'.&lt;br /&gt;
&lt;br /&gt;
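The ploop-to-qcow2 switch mentioned above can be scripted with sed. A sketch using a throwaway sample of the [libvirt] section; on a real node you would edit /etc/nova/nova.conf itself and then restart openstack-nova-compute:

```shell
# Illustration only: flip images_type in a sample [libvirt] section.
printf '[libvirt]\nvirt_type = parallels\nimages_type = ploop\n' > nova-sample.conf
sed -i 's/^images_type = .*/images_type = qcow2/' nova-sample.conf
grep '^images_type' nova-sample.conf   # prints: images_type = qcow2
```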
== Install and configure a block storage node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/mitaka/install-guide-rdo/cinder-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/cinder/cinder.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 enabled_backends = lvmdriver-1,vstorage-ploop,vstorage-qcow2&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
 [vstorage-ploop]&lt;br /&gt;
 vzstorage_default_volume_format = ploop&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
 [vstorage-qcow2]&lt;br /&gt;
 vzstorage_default_volume_format = qcow2&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
* Create /etc/cinder/vzstorage-shares-vstorage.conf with the following content:&lt;br /&gt;
&lt;br /&gt;
 YOUR-CLUSTER-NAME [&amp;quot;-u&amp;quot;, &amp;quot;cinder&amp;quot;, &amp;quot;-g&amp;quot;, &amp;quot;root&amp;quot;, &amp;quot;-m&amp;quot;, &amp;quot;0770&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
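The shares file can be written from the shell as well. A sketch, with YOUR-CLUSTER-NAME kept as a placeholder and a local file name used instead of the real /etc/cinder/ path:

```shell
# Sketch: write the vzstorage shares line to a local file for inspection;
# on a real node the target is /etc/cinder/vzstorage-shares-vstorage.conf.
printf '%s\n' 'YOUR-CLUSTER-NAME ["-u", "cinder", "-g", "root", "-m", "0770"]' > vzstorage-shares-vstorage.conf
cat vzstorage-shares-vstorage.conf
```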
* Create two new volume types:&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-qcow2&lt;br /&gt;
 $ cinder type-key vstorage-qcow2 set volume_backend_name=vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-ploop&lt;br /&gt;
 $ cinder type-key vstorage-ploop set volume_backend_name=vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
* Create directory for storage logs:&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /var/log/pstorage&lt;br /&gt;
&lt;br /&gt;
* Authenticate your Virtuozzo Storage client nodes in order to allow them to mount the cluster:&lt;br /&gt;
&lt;br /&gt;
 $ echo $CLUSTER_PASSWD | vstorage auth-node -c cc -P&lt;br /&gt;
 &lt;br /&gt;
* Then restart cinder services:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-cinder-api&lt;br /&gt;
 $ systemctl restart openstack-cinder-scheduler&lt;br /&gt;
 $ systemctl restart openstack-cinder-volume&lt;br /&gt;
&lt;br /&gt;
== How to create a new ploop image ready to upload to Glance == &amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Select an OS template. The following templates are available: vzlinux-7, centos-7, ubuntu-16.04, ubuntu-14.04, debian-8.0, centos-6, debian-8.0-x86_64-minimal&lt;br /&gt;
&lt;br /&gt;
 $ ct=centos-7&lt;br /&gt;
&lt;br /&gt;
* Create a new container based on the selected OS distribution&lt;br /&gt;
&lt;br /&gt;
 $ prlctl create glance-$ct --vmtype ct --ostemplate $ct&lt;br /&gt;
&lt;br /&gt;
* Set the IP address and DNS server so the container can connect to the internet&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --ipadd IPADDR --nameserver DNS_IPADDR&lt;br /&gt;
&lt;br /&gt;
* Add an additional network adapter&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --device-add net --network Bridged --dhcp on&lt;br /&gt;
&lt;br /&gt;
* Start the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl start glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Install the cloud-init package&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct yum install cloud-init -y&lt;br /&gt;
&lt;br /&gt;
* Remove the following modules from cloud.cfg&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i '/- growpart/d' /etc/cloud/cloud.cfg&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i '/- resizefs/d' /etc/cloud/cloud.cfg&lt;br /&gt;
&lt;br /&gt;
* Prepare network scripts&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /etc/sysconfig/network-scripts/ifcfg-eth0 &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 DEVICE=eth0&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 NM_CONTROLLED=no&lt;br /&gt;
 BOOTPROTO=dhcp&lt;br /&gt;
 _EOF&lt;br /&gt;
&lt;br /&gt;
* If you need more than one network adapter within the container, make as many copies as you need&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i 's/eth0/eth1/' /etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
&lt;br /&gt;
* Perform some cleanup&lt;br /&gt;
&lt;br /&gt;
 $ rm -f /etc/sysconfig/network-scripts/ifcfg-venet0*&lt;br /&gt;
 $ rm -f /etc/resolv.conf&lt;br /&gt;
&lt;br /&gt;
* Stop the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl stop glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Create ploop disk and copy files&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct&lt;br /&gt;
 $ ploop init -s 950M /tmp/ploop-$ct/$ct.hds&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct/dst&lt;br /&gt;
 $ ploop mount -m /tmp/ploop-$ct/dst /tmp/ploop-$ct/DiskDescriptor.xml&lt;br /&gt;
 $ prlctl mount glance-$ct&lt;br /&gt;
 $ id=$(vzlist glance-$ct | awk ' NR&amp;gt;1 { print $1 }')&lt;br /&gt;
 $ cp -Pr --preserve=all /vz/root/$id/* /tmp/ploop-$ct/dst/&lt;br /&gt;
 $ prlctl umount glance-$ct&lt;br /&gt;
 $ ploop umount -m /tmp/ploop-$ct/dst/&lt;br /&gt;
&lt;br /&gt;
* Now the image /tmp/ploop-$ct/$ct.hds is ready to be uploaded to Glance&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html Controller Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html Compute Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/environment-packages.html OpenStack Installation Guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo Documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22552</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22552"/>
		<updated>2017-03-01T14:59:08Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6, and Virtuozzo 7 adds many new capabilities to the OpenStack integration.&lt;br /&gt;
&lt;br /&gt;
This guide describes two ways of installing OpenStack on Virtuozzo nodes. The first is for quick/development/POC needs; the second is for production. Please keep in mind that devstack installs OpenStack for demo/POC/development purposes only, which means the setup is reset after a host reboot.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to set up OpenStack with Virtuozzo 7:&lt;br /&gt;
#controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as a controller and Virtuozzo containers host.&lt;br /&gt;
#compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as a virtual machines host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
If you have the br0 bridge configured as an IP interface, then you should move the IP address assigned to it to the physical ethernet interface bridged to br0.&lt;br /&gt;
You can check your configuration with the following command:&lt;br /&gt;
&lt;br /&gt;
 $ if=$(brctl show | grep '^br0' | awk ' { print $4 }') &amp;amp;&amp;amp; addr=$(ip addr | grep -w 'br0' | grep inet | awk ' {print $2} ') &amp;amp;&amp;amp; gw=$(ip route | grep default | awk ' { print $3 } ') &amp;amp;&amp;amp; echo &amp;quot;My interface is '$if', gateway is '$gw', IP address '$addr'&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For instance, the above command may produce the following output:&lt;br /&gt;
&lt;br /&gt;
 My interface is 'ens33', gateway is '192.168.190.2', IP address '192.168.190.134/24'.&lt;br /&gt;
&lt;br /&gt;
Then edit your /etc/sysconfig/network-scripts/ifcfg-ens33 to have the following content, and remove the BRIDGE=&amp;quot;br0&amp;quot; line from it:&lt;br /&gt;
 ...&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 IPADDR=192.168.190.134&lt;br /&gt;
 GATEWAY=192.168.190.2&lt;br /&gt;
 PREFIX=24&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Remove the /etc/sysconfig/network-scripts/ifcfg-br0 file:&lt;br /&gt;
&lt;br /&gt;
 $ rm /etc/sysconfig/network-scripts/ifcfg-br0&lt;br /&gt;
 &lt;br /&gt;
Then restart the network service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart network&lt;br /&gt;
&lt;br /&gt;
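Putting the steps above together, the static ifcfg file can be generated from the values reported by the check command. A sketch using the sample addresses from the output above (assumed values), written to a local path rather than the live /etc/sysconfig/network-scripts/ file:

```shell
# Illustration only: assumed values taken from the sample output above.
# On a real node, write this to /etc/sysconfig/network-scripts/ifcfg-ens33.
printf 'DEVICE=ens33\nONBOOT=yes\nIPADDR=192.168.190.134\nGATEWAY=192.168.190.2\nPREFIX=24\n' > ifcfg-ens33.sample
cat ifcfg-ens33.sample
```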
== Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*) == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and it is running on a different node than the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
Check that cluster discovery is working first:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
Output will show the discovered clusters.&lt;br /&gt;
Now you need to authenticate the controller node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
Output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.0.0/24 --floating_range 10.24.41.0/24 --floating_pool &amp;quot;start=10.24.41.151,end=10.24.41.199&amp;quot; --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL &lt;br /&gt;
&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Compute Node (*Developer/POC Setup*) ==&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone Virtuozzo scripts to your COMPUTE node:&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd /vz/virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and it is running on a different node than the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
Check cluster discovery is working fine first: &lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
Output will show the discovered clusters.&lt;br /&gt;
Now you need to authenticate the compute node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
Output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the COMPUTE node. Please read the script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1!  --use_provider_network true --mode COMPUTE --controller 10.24.41.25 &lt;br /&gt;
&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
== How to change Virtualization Type to Virtual Machines on the Compute Node ==&lt;br /&gt;
&lt;br /&gt;
If you want to use virtual machines instead of containers on your compute node, you need to change the nova configuration on the selected compute node as follows.&lt;br /&gt;
&lt;br /&gt;
Open nova configuration file:&lt;br /&gt;
 $ vi /etc/nova/nova.conf&lt;br /&gt;
&lt;br /&gt;
Change the following lines:&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = qcow2&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
Delete the line:&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
Save the file.&lt;br /&gt;
&lt;br /&gt;
Restart nova-compute service:&lt;br /&gt;
 $ su stack&lt;br /&gt;
 $ screen -r&lt;br /&gt;
Press Ctrl-c&lt;br /&gt;
 $ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' &amp;amp; echo $! &amp;gt;/vz/stack/status/stack/n-cpu.pid; fg || echo &amp;quot;n-cpu failed to start&amp;quot; | tee &amp;quot;/vz/stack/status/stack/n-cpu.failure&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To exit from screen session:&lt;br /&gt;
Press Ctrl+a+d&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after node restart. To redeploy OpenStack on the same nodes do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run ./setup_devstack_vz7.sh with options you need.&lt;br /&gt;
&lt;br /&gt;
== Installing OpenStack with the help of packstack on [[Virtuozzo]] 7 (*Production Setup*) == &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Create a new repo file:&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /etc/yum.repos.d/virtuozzo-extra.repo &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 [virtuozzo-extra]&lt;br /&gt;
 name=Virtuozzo Extra&lt;br /&gt;
 baseurl=http://repo.virtuozzo.com/openstack/newton/x86_64/os/&lt;br /&gt;
 enabled=1&lt;br /&gt;
 gpgcheck=1&lt;br /&gt;
 priority=50&lt;br /&gt;
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-7&lt;br /&gt;
 _EOF&lt;br /&gt;
&lt;br /&gt;
* Add RDO repository:&lt;br /&gt;
 &lt;br /&gt;
 $ yum install https://rdoproject.org/repos/rdo-release.rpm&lt;br /&gt;
&lt;br /&gt;
* Install packstack package:&lt;br /&gt;
&lt;br /&gt;
 $ yum install openstack-packstack&lt;br /&gt;
&lt;br /&gt;
* Download sample Vz7 packstack answer file:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://raw.githubusercontent.com/virtuozzo/virtuozzo-openstack-scripts/master/vz7-packstack-sample.txt&lt;br /&gt;
&lt;br /&gt;
* Edit vz7-packstack-sample.txt to enable or disable the necessary services. Also make sure correct IP addresses are specified for the following parameters in the file:&lt;br /&gt;
&lt;br /&gt;
 CONFIG_CONTROLLER_HOST&lt;br /&gt;
 CONFIG_COMPUTE_HOSTS&lt;br /&gt;
 CONFIG_NETWORK_HOSTS&lt;br /&gt;
 CONFIG_AMQP_HOST&lt;br /&gt;
 CONFIG_MARIADB_HOST&lt;br /&gt;
 CONFIG_REDIS_HOST&lt;br /&gt;
&lt;br /&gt;
* Be sure to change the CONFIG_DEFAULT_PASSWORD parameter!&lt;br /&gt;
* Then run packstack:&lt;br /&gt;
&lt;br /&gt;
 $ packstack --answer-file vz7-packstack-sample.txt&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* Change the disk_formats line in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.virtuozzo.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --property hypervisor_type=vz --property cinder_img_volume_type=vstorage-ploop --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-hvm --disk-format qcow2 --container-format bare --property cinder_img_volume_type=vstorage-qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2&lt;br /&gt;
&lt;br /&gt;
* The CentOS cloud image can be downloaded from [http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 here]&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
 pointer_model = ps2mouse&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 vzstorage_mount_group = root&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
* Remove the 'cpu_mode' parameter or set it as follows:&lt;br /&gt;
&lt;br /&gt;
 cpu_mode=none&lt;br /&gt;
  &lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
* If you plan to run virtual machines on your compute node, change the 'images_type' parameter to 'qcow2'.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a block storage node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/mitaka/install-guide-rdo/cinder-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/cinder/cinder.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 enabled_backends = lvmdriver-1,vstorage-ploop,vstorage-qcow2&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
 [vstorage-ploop]&lt;br /&gt;
 vzstorage_default_volume_format = ploop&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
 [vstorage-qcow2]&lt;br /&gt;
 vzstorage_default_volume_format = qcow2&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
* Create /etc/cinder/vzstorage-shares-vstorage.conf with the following content:&lt;br /&gt;
&lt;br /&gt;
 YOUR-CLUSTER-NAME [&amp;quot;-u&amp;quot;, &amp;quot;cinder&amp;quot;, &amp;quot;-g&amp;quot;, &amp;quot;root&amp;quot;, &amp;quot;-m&amp;quot;, &amp;quot;0770&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
* Create two new volume types:&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-qcow2&lt;br /&gt;
 $ cinder type-key vstorage-qcow2 set volume_backend_name=vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-ploop&lt;br /&gt;
 $ cinder type-key vstorage-ploop set volume_backend_name=vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
* Create directory for storage logs:&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /var/log/pstorage&lt;br /&gt;
&lt;br /&gt;
* Authenticate your Virtuozzo Storage client nodes in order to allow them to mount the cluster:&lt;br /&gt;
&lt;br /&gt;
 $ echo $CLUSTER_PASSWD | vstorage auth-node -c cc -P&lt;br /&gt;
 &lt;br /&gt;
* Then restart cinder services:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-cinder-api&lt;br /&gt;
 $ systemctl restart openstack-cinder-scheduler&lt;br /&gt;
 $ systemctl restart openstack-cinder-volume&lt;br /&gt;
&lt;br /&gt;
== How to create a new ploop image ready to upload to Glance == &amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Select an OS template. The following templates are available: vzlinux-7, centos-7, ubuntu-16.04, ubuntu-14.04, debian-8.0, centos-6, debian-8.0-x86_64-minimal&lt;br /&gt;
&lt;br /&gt;
 $ ct=centos-7&lt;br /&gt;
&lt;br /&gt;
* Create a new container based on the selected OS distribution&lt;br /&gt;
&lt;br /&gt;
 $ prlctl create glance-$ct --vmtype ct --ostemplate $ct&lt;br /&gt;
&lt;br /&gt;
* Set the IP address and DNS server so the container can connect to the internet&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --ipadd IPADDR --nameserver DNS_IPADDR&lt;br /&gt;
&lt;br /&gt;
* Add an additional network adapter&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --device-add net --network Bridged --dhcp on&lt;br /&gt;
&lt;br /&gt;
* Start the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl start glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Install the cloud-init package&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct yum install cloud-init -y&lt;br /&gt;
&lt;br /&gt;
* Remove the following modules from cloud.cfg&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i '/- growpart/d' /etc/cloud/cloud.cfg&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i '/- resizefs/d' /etc/cloud/cloud.cfg&lt;br /&gt;
&lt;br /&gt;
* Prepare network scripts&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /etc/sysconfig/network-scripts/ifcfg-eth0 &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 DEVICE=eth0&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 NM_CONTROLLED=no&lt;br /&gt;
 BOOTPROTO=dhcp&lt;br /&gt;
 _EOF&lt;br /&gt;
&lt;br /&gt;
* If you need more than one network adapter within the container, make as many copies as you need&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i 's/eth0/eth1/' /etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
&lt;br /&gt;
* Perform some cleanup&lt;br /&gt;
&lt;br /&gt;
 $ rm -f /etc/sysconfig/network-scripts/ifcfg-venet0*&lt;br /&gt;
 $ rm -f /etc/resolv.conf&lt;br /&gt;
&lt;br /&gt;
* Stop the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl stop glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Create ploop disk and copy files&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct&lt;br /&gt;
 $ ploop init -s 950M /tmp/ploop-$ct/$ct.hds&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct/dst&lt;br /&gt;
 $ ploop mount -m /tmp/ploop-$ct/dst /tmp/ploop-$ct/DiskDescriptor.xml&lt;br /&gt;
 $ prlctl mount glance-$ct&lt;br /&gt;
 $ id=$(vzlist glance-$ct | awk ' NR&amp;gt;1 { print $1 }')&lt;br /&gt;
 $ cp -Pr --preserve=all /vz/root/$id/* /tmp/ploop-$ct/dst/&lt;br /&gt;
 $ prlctl umount glance-$ct&lt;br /&gt;
 $ ploop umount -m /tmp/ploop-$ct/dst/&lt;br /&gt;
&lt;br /&gt;
* Now the image /tmp/ploop-$ct/$ct.hds is ready to be uploaded to Glance&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html Controller Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html Compute Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/environment-packages.html OpenStack Installation Guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo Documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22551</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22551"/>
		<updated>2017-03-01T14:58:23Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6, and Virtuozzo 7 adds many new capabilities to the OpenStack integration.&lt;br /&gt;
&lt;br /&gt;
This guide describes two ways of installing OpenStack on Virtuozzo nodes. The first is for quick/development/POC needs; the second is for production. Please keep in mind that devstack installs OpenStack for demo/POC/development purposes only, which means the setup is reset after a host reboot.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to set up OpenStack with Virtuozzo 7:&lt;br /&gt;
#controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as a controller and Virtuozzo containers host.&lt;br /&gt;
#compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as a virtual machines host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
If you have the br0 bridge configured as an IP interface, then you should move the IP address assigned to it to the physical ethernet interface bridged to br0.&lt;br /&gt;
You can check your configuration with the following command:&lt;br /&gt;
&lt;br /&gt;
 $ if=$(brctl show | grep '^br0' | awk ' { print $4 }') &amp;amp;&amp;amp; addr=$(ip addr | grep -w 'br0' | grep inet | awk ' {print $2} ') &amp;amp;&amp;amp; gw=$(ip route | grep default | awk ' { print $3 } ') &amp;amp;&amp;amp; echo &amp;quot;My interface is '$if', gateway is '$gw', IP address '$addr'&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For instance, the above command may produce the following output:&lt;br /&gt;
&lt;br /&gt;
 My interface is 'ens33', gateway is '192.168.190.2', IP address '192.168.190.134/24'.&lt;br /&gt;
&lt;br /&gt;
Then edit your /etc/sysconfig/network-scripts/ifcfg-ens33 to have the following content, and remove the BRIDGE=&amp;quot;br0&amp;quot; line from it:&lt;br /&gt;
 ...&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 IPADDR=192.168.190.134&lt;br /&gt;
 GATEWAY=192.168.190.2&lt;br /&gt;
 PREFIX=24&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Remove the /etc/sysconfig/network-scripts/ifcfg-br0 file:&lt;br /&gt;
&lt;br /&gt;
 $ rm /etc/sysconfig/network-scripts/ifcfg-br0&lt;br /&gt;
 &lt;br /&gt;
Then restart the network service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart network&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*) == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and it is running on a different node than the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
Check that cluster discovery is working first:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
Output will show the discovered clusters.&lt;br /&gt;
Now you need to authenticate the controller node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the virtuozzo storage cluster password and press Enter. &lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
Output will show Virtuozzo storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.0.0/24 --floating_range 10.24.41.0/24 --floating_pool &amp;quot;start=10.24.41.151,end=10.24.41.199&amp;quot; --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL &lt;br /&gt;
&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Compute Node (*Developer/POC Setup*) ==&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone Virtuozzo scripts to your COMPUTE node:&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd /vz/virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node rather than on the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
Check cluster discovery is working fine first: &lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
Output will show the discovered clusters.&lt;br /&gt;
Now you need to authenticate the compute node in the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the COMPUTE node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1!  --use_provider_network true --mode COMPUTE --controller 10.24.41.25 &lt;br /&gt;
&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
== How to change Virtualization Type to Virtual Machines on the Compute Node ==&lt;br /&gt;
&lt;br /&gt;
If you want to use virtual machines instead of containers on your compute node, you need to change the virtualization type on the selected compute node.&lt;br /&gt;
&lt;br /&gt;
Open nova configuration file:&lt;br /&gt;
 $ vi /etc/nova/nova.conf&lt;br /&gt;
&lt;br /&gt;
Change the following lines:&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = qcow2&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
Delete the line:&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
Save the file.&lt;br /&gt;
&lt;br /&gt;
Restart nova-compute service:&lt;br /&gt;
 $ su stack&lt;br /&gt;
 $ screen -r&lt;br /&gt;
Press Ctrl-c&lt;br /&gt;
 $ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' &amp;amp; echo $! &amp;gt;/vz/stack/status/stack/n-cpu.pid; fg || echo &amp;quot;n-cpu failed to start&amp;quot; | tee &amp;quot;/vz/stack/status/stack/n-cpu.failure&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To detach from the screen session:&lt;br /&gt;
Press Ctrl-a, then d&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after node restart. To redeploy OpenStack on the same nodes do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;./setup_devstack_vz7.sh&amp;lt;/code&amp;gt; with the options you need.&lt;br /&gt;
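The redeploy steps above can be wrapped in a single shell function. This is only a sketch: the function name is hypothetical, and it assumes the scripts were cloned to /vz as described earlier in this guide.

```shell
# Sketch: re-run the devstack setup after a reboot (assumes the scripts
# were already cloned to /vz/virtuozzo-openstack-scripts as shown above).
redeploy_openstack() {
  cd /vz/virtuozzo-openstack-scripts || return 1
  git pull
  ./setup_devstack_vz7.sh "$@"
}
```

Call it with the same options you would pass to setup_devstack_vz7.sh directly.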
&lt;br /&gt;
== Installing OpenStack with the help of packstack on [[Virtuozzo]] 7 (*Production Setup*) == &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Create a new repo file:&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /etc/yum.repos.d/virtuozzo-extra.repo &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 [virtuozzo-extra]&lt;br /&gt;
 name=Virtuozzo Extra&lt;br /&gt;
 baseurl=http://repo.virtuozzo.com/openstack/newton/x86_64/os/&lt;br /&gt;
 enabled=1&lt;br /&gt;
 gpgcheck=1&lt;br /&gt;
 priority=50&lt;br /&gt;
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-7&lt;br /&gt;
 _EOF&lt;br /&gt;
&lt;br /&gt;
* Add RDO repository:&lt;br /&gt;
 &lt;br /&gt;
 $ yum install https://rdoproject.org/repos/rdo-release.rpm&lt;br /&gt;
&lt;br /&gt;
* Install packstack package:&lt;br /&gt;
&lt;br /&gt;
 $ yum install openstack-packstack&lt;br /&gt;
&lt;br /&gt;
* Download sample Vz7 packstack answer file:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://raw.githubusercontent.com/virtuozzo/virtuozzo-openstack-scripts/master/vz7-packstack-sample.txt&lt;br /&gt;
&lt;br /&gt;
* Edit vz7-packstack-sample.txt, enabling/disabling the necessary services. Also make sure the correct IP addresses are specified in the following parameters in the file:&lt;br /&gt;
&lt;br /&gt;
 CONFIG_CONTROLLER_HOST&lt;br /&gt;
 CONFIG_COMPUTE_HOSTS&lt;br /&gt;
 CONFIG_NETWORK_HOSTS&lt;br /&gt;
 CONFIG_AMQP_HOST&lt;br /&gt;
 CONFIG_MARIADB_HOST&lt;br /&gt;
 CONFIG_REDIS_HOST&lt;br /&gt;
&lt;br /&gt;
* Be sure to change the CONFIG_DEFAULT_PASSWORD parameter!&lt;br /&gt;
* Then run packstack:&lt;br /&gt;
&lt;br /&gt;
 $ packstack --answer-file vz7-packstack-sample.txt&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
Please use this chapter if you are going to run either containers or virtual machines on your compute node, but not both simultaneously. If you need to run containers and VMs simultaneously, please use the next chapter.&lt;br /&gt;
&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* Change the disk_formats line in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
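If you prefer to script this edit, sed can append 'ploop' to the existing disk_formats line. The snippet below is a sketch demonstrated on a demo file (an assumption for illustration); on a real node, point it at /etc/glance/glance-api.conf and keep a backup first.

```shell
# Demo file standing in for /etc/glance/glance-api.conf (illustration only)
printf 'disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso\n' > glance-demo.conf
# Append ',ploop' to the existing disk_formats line in place
sed -i '/^disk_formats/ s/$/,ploop/' glance-demo.conf
cat glance-demo.conf
```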
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.virtuozzo.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --property hypervisor_type=vz --property cinder_img_volume_type=vstorage-ploop --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-hvm --disk-format qcow2 --container-format bare --property cinder_img_volume_type=vstorage-qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2&lt;br /&gt;
&lt;br /&gt;
* The CentOS cloud image can be downloaded from [http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 here]&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to above instructions change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
 pointer_model = ps2mouse&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 vzstorage_mount_group = root&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
* Remove 'cpu_mode' parameter or set the following:&lt;br /&gt;
&lt;br /&gt;
 cpu_mode=none&lt;br /&gt;
  &lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
* If you plan to run Virtual Machines on your Compute node, change 'images_type' parameter to 'qcow2'&lt;br /&gt;
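The nova.conf edits above can also be applied non-interactively with sed. The snippet below is a sketch run against a demo copy of the file (an assumption for illustration); substitute /etc/nova/nova.conf on a real compute node.

```shell
# Demo file standing in for /etc/nova/nova.conf (illustration only)
printf '[libvirt]\nvirt_type = kvm\nimages_type = raw\n' > nova-demo.conf
# Set the values this guide requires for a Virtuozzo container compute node
sed -i 's/^virt_type = .*/virt_type = parallels/' nova-demo.conf
sed -i 's/^images_type = .*/images_type = ploop/' nova-demo.conf
cat nova-demo.conf
```

For a VM compute node, set images_type to qcow2 instead, as noted above.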
&lt;br /&gt;
== Install and configure a block storage node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/mitaka/install-guide-rdo/cinder-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to above instructions change /etc/cinder/cinder.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 enabled_backends = lvmdriver-1,vstorage-ploop,vstorage-qcow2&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
 [vstorage-ploop]&lt;br /&gt;
 vzstorage_default_volume_format = ploop&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
 [vstorage-qcow2]&lt;br /&gt;
 vzstorage_default_volume_format = qcow2&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
* Create /etc/cinder/vzstorage-shares-vstorage.conf with the following content:&lt;br /&gt;
&lt;br /&gt;
 YOUR-CLUSTER-NAME [&amp;quot;-u&amp;quot;, &amp;quot;cinder&amp;quot;, &amp;quot;-g&amp;quot;, &amp;quot;root&amp;quot;, &amp;quot;-m&amp;quot;, &amp;quot;0770&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
* Create two new volume types:&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-qcow2&lt;br /&gt;
 $ cinder type-key vstorage-qcow2 set volume_backend_name=vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-ploop&lt;br /&gt;
 $ cinder type-key vstorage-ploop set volume_backend_name=vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
* Create directory for storage logs:&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /var/log/pstorage&lt;br /&gt;
&lt;br /&gt;
* Authenticate your Virtuozzo Storage client nodes in order to allow them to mount the cluster:&lt;br /&gt;
&lt;br /&gt;
 $ echo $CLUSTER_PASSWD | vstorage auth-node -c cc -P&lt;br /&gt;
 &lt;br /&gt;
* Then restart cinder services:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-cinder-api&lt;br /&gt;
 $ systemctl restart openstack-cinder-scheduler&lt;br /&gt;
 $ systemctl restart openstack-cinder-volume&lt;br /&gt;
&lt;br /&gt;
== How to create a new ploop image ready to upload to Glance == &amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Select an OS template. The following templates are available: vzlinux-7, centos-7, ubuntu-16.04, ubuntu-14.04, debian-8.0, centos-6, debian-8.0-x86_64-minimal&lt;br /&gt;
&lt;br /&gt;
 $ ct=centos-7&lt;br /&gt;
&lt;br /&gt;
* Create a new container based on the necessary OS distribution&lt;br /&gt;
&lt;br /&gt;
 $ prlctl create glance-$ct --vmtype ct --ostemplate $ct&lt;br /&gt;
&lt;br /&gt;
* Set the IP address and DNS server to be able to connect to the Internet from the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --ipadd IPADDR --nameserver DNS_IPADDR&lt;br /&gt;
&lt;br /&gt;
* Add additional network adapter&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --device-add net --network Bridged --dhcp on&lt;br /&gt;
&lt;br /&gt;
* Start the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl start glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Install the cloud-init package&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct yum install cloud-init -y&lt;br /&gt;
&lt;br /&gt;
* Remove the following modules from cloud.cfg&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i '/- growpart/d' /etc/cloud/cloud.cfg&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i '/- resizefs/d' /etc/cloud/cloud.cfg&lt;br /&gt;
&lt;br /&gt;
* Prepare network scripts&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /etc/sysconfig/network-scripts/ifcfg-eth0 &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 DEVICE=eth0&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 NM_CONTROLLED=no&lt;br /&gt;
 BOOTPROTO=dhcp&lt;br /&gt;
 _EOF&lt;br /&gt;
&lt;br /&gt;
* If you need more than one network adapter within the container, make as many copies as you need:&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i 's/eth0/eth1/g' /etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
&lt;br /&gt;
* Perform some cleanup&lt;br /&gt;
&lt;br /&gt;
 $ rm -f /etc/sysconfig/network-scripts/ifcfg-venet0*&lt;br /&gt;
 $ rm -f /etc/resolv.conf&lt;br /&gt;
&lt;br /&gt;
* Stop the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl stop glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Create ploop disk and copy files&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct&lt;br /&gt;
 $ ploop init -s 950M /tmp/ploop-$ct/$ct.hds&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct/dst&lt;br /&gt;
 $ ploop mount -m /tmp/ploop-$ct/dst /tmp/ploop-$ct/DiskDescriptor.xml&lt;br /&gt;
 $ prlctl mount glance-$ct&lt;br /&gt;
 $ id=$(vzlist glance-$ct | awk ' NR&amp;gt;1 { print $1 }')&lt;br /&gt;
 $ cp -Pr --preserve=all /vz/root/$id/* /tmp/ploop-$ct/dst/&lt;br /&gt;
 $ prlctl umount glance-$ct&lt;br /&gt;
 $ ploop umount -m /tmp/ploop-$ct/dst/&lt;br /&gt;
&lt;br /&gt;
* Now the image /tmp/ploop-$ct/$ct.hds is ready to be uploaded to Glance&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html Controller Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html Compute Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/environment-packages.html OpenStack Installation Guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo Documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22550</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22550"/>
		<updated>2017-03-01T14:39:58Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6. With Virtuozzo 7 we are adding a lot of new capabilities to the OpenStack integration. &lt;br /&gt;
&lt;br /&gt;
This guide describes two ways of installing OpenStack on Virtuozzo nodes. The first is for quick/development/POC needs; the second is for production. Please keep in mind that devstack installs OpenStack for demo/POC/development purposes only, which means the setup will be reset after a host reboot.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to setup OpenStack with Virtuozzo 7:&lt;br /&gt;
#controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and a Virtuozzo containers host.&lt;br /&gt;
#compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as a virtual machines host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
If you have the br0 bridge configured as an IP interface, you should move the IP address assigned to it to the physical Ethernet interface bridged to br0.&lt;br /&gt;
You can check your configuration with the following command:&lt;br /&gt;
&lt;br /&gt;
 $ if=$(brctl show | grep '^br0' | awk ' { print $4 }') &amp;amp;&amp;amp; addr=$(ip addr | grep -w 'br0' | grep inet | awk ' {print $2} ') &amp;amp;&amp;amp; gw=$(ip route | grep default | awk ' { print $3 } ') &amp;amp;&amp;amp; echo &amp;quot;My interface is '$if', gateway is '$gw', IP address '$addr'&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For instance, you may get the following output after executing the above command:&lt;br /&gt;
&lt;br /&gt;
 My interface is 'ens33', gateway is '192.168.190.2', IP address '192.168.190.134/24'.&lt;br /&gt;
&lt;br /&gt;
Then edit your /etc/sysconfig/network-scripts/ifcfg-ens33 to have the following content, and remove the BRIDGE=&amp;quot;br0&amp;quot; line from it:&lt;br /&gt;
 ...&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 IPADDR=192.168.190.134&lt;br /&gt;
 GATEWAY=192.168.190.2&lt;br /&gt;
 PREFIX=24&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Remove the /etc/sysconfig/network-scripts/ifcfg-br0 file:&lt;br /&gt;
&lt;br /&gt;
 $ rm /etc/sysconfig/network-scripts/ifcfg-br0&lt;br /&gt;
 &lt;br /&gt;
Then restart the network service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart network&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*) == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node rather than on the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
Check cluster discovery is working fine first: &lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
Output will show the discovered clusters.&lt;br /&gt;
Now you need to authenticate the controller node in the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.0.0/24 --floating_range 10.24.41.0/24 --floating_pool &amp;quot;start=10.24.41.151,end=10.24.41.199&amp;quot; --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL &lt;br /&gt;
&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Compute Node (*Developer/POC Setup*) ==&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone Virtuozzo scripts to your COMPUTE node:&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd /vz/virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node rather than on the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
Check cluster discovery is working fine first: &lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
Output will show the discovered clusters.&lt;br /&gt;
Now you need to authenticate the compute node in the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the COMPUTE node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1!  --use_provider_network true --mode COMPUTE --controller 10.24.41.25 &lt;br /&gt;
&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
== How to change Virtualization Type to Virtual Machines on the Compute Node ==&lt;br /&gt;
&lt;br /&gt;
If you want to use virtual machines instead of containers on your compute node, you need to change the virtualization type on the selected compute node.&lt;br /&gt;
&lt;br /&gt;
Open nova configuration file:&lt;br /&gt;
 $ vi /etc/nova/nova.conf&lt;br /&gt;
&lt;br /&gt;
Change the following lines:&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = qcow2&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
Delete the line:&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
Save the file.&lt;br /&gt;
&lt;br /&gt;
Restart nova-compute service:&lt;br /&gt;
 $ su stack&lt;br /&gt;
 $ screen -r&lt;br /&gt;
Press Ctrl-c&lt;br /&gt;
 $ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' &amp;amp; echo $! &amp;gt;/vz/stack/status/stack/n-cpu.pid; fg || echo &amp;quot;n-cpu failed to start&amp;quot; | tee &amp;quot;/vz/stack/status/stack/n-cpu.failure&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To detach from the screen session:&lt;br /&gt;
Press Ctrl-a, then d&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after node restart. To redeploy OpenStack on the same nodes do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;./setup_devstack_vz7.sh&amp;lt;/code&amp;gt; with the options you need.&lt;br /&gt;
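The redeploy steps above can be wrapped in a single shell function. This is only a sketch: the function name is hypothetical, and it assumes the scripts were cloned to /vz as described earlier in this guide.

```shell
# Sketch: re-run the devstack setup after a reboot (assumes the scripts
# were already cloned to /vz/virtuozzo-openstack-scripts as shown above).
redeploy_openstack() {
  cd /vz/virtuozzo-openstack-scripts || return 1
  git pull
  ./setup_devstack_vz7.sh "$@"
}
```

Call it with the same options you would pass to setup_devstack_vz7.sh directly.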
&lt;br /&gt;
== Installing OpenStack with the help of packstack on [[Virtuozzo]] 7 (*Production Setup*) == &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Create a new repo file:&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /etc/yum.repos.d/virtuozzo-extra.repo &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 [virtuozzo-extra]&lt;br /&gt;
 name=Virtuozzo Extra&lt;br /&gt;
 baseurl=http://repo.virtuozzo.com/openstack/newton/x86_64/os/&lt;br /&gt;
 enabled=1&lt;br /&gt;
 gpgcheck=1&lt;br /&gt;
 priority=50&lt;br /&gt;
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-7&lt;br /&gt;
 _EOF&lt;br /&gt;
&lt;br /&gt;
* Add RDO repository:&lt;br /&gt;
 &lt;br /&gt;
 $ yum install https://rdoproject.org/repos/rdo-release.rpm&lt;br /&gt;
&lt;br /&gt;
* Install packstack package:&lt;br /&gt;
&lt;br /&gt;
 $ yum install openstack-packstack&lt;br /&gt;
&lt;br /&gt;
* Download sample Vz7 packstack answer file:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://raw.githubusercontent.com/virtuozzo/virtuozzo-openstack-scripts/master/vz7-packstack-sample.txt&lt;br /&gt;
&lt;br /&gt;
* Edit vz7-packstack-sample.txt, enabling/disabling the necessary services. Also make sure the correct IP addresses are specified in the following parameters in the file:&lt;br /&gt;
&lt;br /&gt;
 CONFIG_CONTROLLER_HOST&lt;br /&gt;
 CONFIG_COMPUTE_HOSTS&lt;br /&gt;
 CONFIG_NETWORK_HOSTS&lt;br /&gt;
 CONFIG_AMQP_HOST&lt;br /&gt;
 CONFIG_MARIADB_HOST&lt;br /&gt;
 CONFIG_REDIS_HOST&lt;br /&gt;
&lt;br /&gt;
* Be sure to change the CONFIG_DEFAULT_PASSWORD parameter!&lt;br /&gt;
* Then run packstack:&lt;br /&gt;
&lt;br /&gt;
 $ packstack --answer-file vz7-packstack-sample.txt&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* Change the disk_formats line in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
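If you prefer to script this edit, sed can append 'ploop' to the existing disk_formats line. The snippet below is a sketch demonstrated on a demo file (an assumption for illustration); on a real node, point it at /etc/glance/glance-api.conf and keep a backup first.

```shell
# Demo file standing in for /etc/glance/glance-api.conf (illustration only)
printf 'disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso\n' > glance-demo.conf
# Append ',ploop' to the existing disk_formats line in place
sed -i '/^disk_formats/ s/$/,ploop/' glance-demo.conf
cat glance-demo.conf
```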
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.virtuozzo.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --property hypervisor_type=vz --property cinder_img_volume_type=vstorage-ploop --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-hvm --disk-format qcow2 --container-format bare --property cinder_img_volume_type=vstorage-qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2&lt;br /&gt;
&lt;br /&gt;
* The CentOS cloud image can be downloaded from [http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 here]&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to above instructions change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
 pointer_model = ps2mouse&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 vzstorage_mount_group = root&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = vz:///system&lt;br /&gt;
&lt;br /&gt;
* Remove 'cpu_mode' parameter or set the following:&lt;br /&gt;
&lt;br /&gt;
 cpu_mode=none&lt;br /&gt;
  &lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
* If you plan to run Virtual Machines on your Compute node, change 'images_type' parameter to 'qcow2'&lt;br /&gt;
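The nova.conf edits above can also be applied non-interactively with sed. The snippet below is a sketch run against a demo copy of the file (an assumption for illustration); substitute /etc/nova/nova.conf on a real compute node.

```shell
# Demo file standing in for /etc/nova/nova.conf (illustration only)
printf '[libvirt]\nvirt_type = kvm\nimages_type = raw\n' > nova-demo.conf
# Set the values this guide requires for a Virtuozzo container compute node
sed -i 's/^virt_type = .*/virt_type = parallels/' nova-demo.conf
sed -i 's/^images_type = .*/images_type = ploop/' nova-demo.conf
cat nova-demo.conf
```

For a VM compute node, set images_type to qcow2 instead, as noted above.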
&lt;br /&gt;
== Install and configure a block storage node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/mitaka/install-guide-rdo/cinder-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to above instructions change /etc/cinder/cinder.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 enabled_backends = lvmdriver-1,vstorage-ploop,vstorage-qcow2&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
 [vstorage-ploop]&lt;br /&gt;
 vzstorage_default_volume_format = ploop&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
 [vstorage-qcow2]&lt;br /&gt;
 vzstorage_default_volume_format = qcow2&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
* Create /etc/cinder/vzstorage-shares-vstorage.conf with the following content:&lt;br /&gt;
&lt;br /&gt;
 YOUR-CLUSTER-NAME [&amp;quot;-u&amp;quot;, &amp;quot;cinder&amp;quot;, &amp;quot;-g&amp;quot;, &amp;quot;root&amp;quot;, &amp;quot;-m&amp;quot;, &amp;quot;0770&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
* Create two new volume types:&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-qcow2&lt;br /&gt;
 $ cinder type-key vstorage-qcow2 set volume_backend_name=vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-ploop&lt;br /&gt;
 $ cinder type-key vstorage-ploop set volume_backend_name=vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
* Create directory for storage logs:&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /var/log/pstorage&lt;br /&gt;
&lt;br /&gt;
* Authenticate your Virtuozzo Storage client nodes in order to allow them to mount the cluster:&lt;br /&gt;
&lt;br /&gt;
 $ echo $CLUSTER_PASSWD | vstorage auth-node -c cc -P&lt;br /&gt;
 &lt;br /&gt;
* Then restart cinder services:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-cinder-api&lt;br /&gt;
 $ systemctl restart openstack-cinder-scheduler&lt;br /&gt;
 $ systemctl restart openstack-cinder-volume&lt;br /&gt;
&lt;br /&gt;
== How to create a new ploop image ready to upload to Glance == &amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Select an OS template. The following templates are available: vzlinux-7, centos-7, ubuntu-16.04, ubuntu-14.04, debian-8.0, centos-6, debian-8.0-x86_64-minimal&lt;br /&gt;
&lt;br /&gt;
 $ ct=centos-7&lt;br /&gt;
&lt;br /&gt;
* Create a new container based on the necessary OS distribution&lt;br /&gt;
&lt;br /&gt;
 $ prlctl create glance-$ct --vmtype ct --ostemplate $ct&lt;br /&gt;
&lt;br /&gt;
* Set the IP address and DNS server to be able to connect to the Internet from the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --ipadd IPADDR --nameserver DNS_IPADDR&lt;br /&gt;
&lt;br /&gt;
* Add additional network adapter&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --device-add net --network Bridged --dhcp on&lt;br /&gt;
&lt;br /&gt;
* Start the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl start glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Install the cloud-init package&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct yum install cloud-init -y&lt;br /&gt;
&lt;br /&gt;
* Remove the following modules from cloud.cfg&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i '/- growpart/d' /etc/cloud/cloud.cfg&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i '/- resizefs/d' /etc/cloud/cloud.cfg&lt;br /&gt;
&lt;br /&gt;
* Prepare network scripts (run the following inside the container)&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /etc/sysconfig/network-scripts/ifcfg-eth0 &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 DEVICE=eth0&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 NM_CONTROLLED=no&lt;br /&gt;
 BOOTPROTO=dhcp&lt;br /&gt;
 _EOF&lt;br /&gt;
&lt;br /&gt;
* If you need more than one network adapter within the container, make as many copies as you need&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i 's/eth0/eth1/' /etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
&lt;br /&gt;
* Perform some cleanup inside the container&lt;br /&gt;
&lt;br /&gt;
 $ rm -f /etc/sysconfig/network-scripts/ifcfg-venet0*&lt;br /&gt;
 $ rm -f /etc/resolv.conf&lt;br /&gt;
&lt;br /&gt;
* Stop the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl stop glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Create a ploop disk and copy the files&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct&lt;br /&gt;
 $ ploop init -s 950M /tmp/ploop-$ct/$ct.hds&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct/dst&lt;br /&gt;
 $ ploop mount -m /tmp/ploop-$ct/dst /tmp/ploop-$ct/DiskDescriptor.xml&lt;br /&gt;
 $ prlctl mount glance-$ct&lt;br /&gt;
 $ id=$(vzlist glance-$ct | awk ' NR&amp;gt;1 { print $1 }')&lt;br /&gt;
 $ cp -Pr --preserve=all /vz/root/$id/* /tmp/ploop-$ct/dst/&lt;br /&gt;
 $ prlctl umount glance-$ct&lt;br /&gt;
 $ ploop umount -m /tmp/ploop-$ct/dst/&lt;br /&gt;
&lt;br /&gt;
* Now the image /tmp/ploop-$ct/$ct.hds is ready to be uploaded to Glance&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html Controller Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html Compute Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/environment-packages.html OpenStack Installation Guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo Documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22549</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=22549"/>
		<updated>2017-03-01T14:36:48Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6. Virtuozzo 7 adds many new capabilities to the OpenStack integration.&lt;br /&gt;
&lt;br /&gt;
This guide describes two ways of installing OpenStack on Virtuozzo nodes: the first is for quick/development/POC needs, the second is for production. Please keep in mind that DevStack installs OpenStack for demo/POC/development purposes only, which means the setup will be reset after a host reboot.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to set up OpenStack with Virtuozzo 7:&lt;br /&gt;
#controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and Virtuozzo containers host.&lt;br /&gt;
#compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as a virtual machines host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
If you have a br0 bridge configured as an IP interface, you should move the IP address assigned to it to the physical Ethernet interface bridged to br0.&lt;br /&gt;
You can check your configuration with the following command:&lt;br /&gt;
&lt;br /&gt;
 $ if=$(brctl show | grep '^br0' | awk ' { print $4 }') &amp;amp;&amp;amp; addr=$(ip addr | grep -w 'br0' | grep inet | awk ' {print $2} ') &amp;amp;&amp;amp; gw=$(ip route | grep default | awk ' { print $3 } ') &amp;amp;&amp;amp; echo &amp;quot;My interface is '$if', gateway is '$gw', IP address '$addr'&amp;quot;&lt;br /&gt;
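&lt;br /&gt;
The one-liner above chains three parsing steps. As a minimal sketch, here is the same logic broken out and run against canned command output (the sample strings below are assumptions for illustration; on a real host use the live output of brctl show, ip addr and ip route):&lt;br /&gt;
&lt;br /&gt;
```shell
# Step-by-step version of the detection one-liner, using canned output
# so the parsing is visible (the sample strings are illustrative only).
brctl_out='bridge name bridge id       STP enabled interfaces
br0     8000.000c29aabbcc   no      ens33'
ipaddr_out='    inet 192.168.190.134/24 brd 192.168.190.255 scope global br0'
iproute_out='default via 192.168.190.2 dev br0'

if=$(printf '%s\n' "$brctl_out" | grep '^br0' | awk '{ print $4 }')                   # enslaved interface
addr=$(printf '%s\n' "$ipaddr_out" | grep -w 'br0' | grep inet | awk '{ print $2 }')  # IP/prefix on br0
gw=$(printf '%s\n' "$iproute_out" | grep default | awk '{ print $3 }')                # default gateway
echo "My interface is '$if', gateway is '$gw', IP address '$addr'"
```
&lt;br /&gt;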
&lt;br /&gt;
For instance, suppose executing the above script gives the following output:&lt;br /&gt;
&lt;br /&gt;
 My interface is 'ens33', gateway is '192.168.190.2', IP address '192.168.190.134/24'.&lt;br /&gt;
&lt;br /&gt;
Then edit your /etc/sysconfig/network-scripts/ifcfg-ens33 to have the following content, removing the BRIDGE=&amp;quot;br0&amp;quot; line from it:&lt;br /&gt;
 ...&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 IPADDR=192.168.190.134&lt;br /&gt;
 GATEWAY=192.168.190.2&lt;br /&gt;
 PREFIX=24&lt;br /&gt;
 ...&lt;br /&gt;
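&lt;br /&gt;
The IPADDR/PREFIX pair comes from splitting the CIDR string reported by the detection command. A minimal sketch of that split, using the example values above (on a real host substitute the detected $addr and $gw):&lt;br /&gt;
&lt;br /&gt;
```shell
# Split a CIDR string into the IPADDR and PREFIX values used by the
# ifcfg file above (example values; substitute the detected ones).
addr='192.168.190.134/24'
gw='192.168.190.2'
ip=${addr%/*}       # address part before the slash
prefix=${addr#*/}   # prefix length after the slash
printf 'IPADDR=%s\nGATEWAY=%s\nPREFIX=%s\n' "$ip" "$gw" "$prefix"
```
&lt;br /&gt;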
&lt;br /&gt;
Remove /etc/sysconfig/network-scripts/ifcfg-br0 file.&lt;br /&gt;
&lt;br /&gt;
 $ rm /etc/sysconfig/network-scripts/ifcfg-br0&lt;br /&gt;
 &lt;br /&gt;
Then restart network service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart network&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Controller Node with Virtuozzo Containers Support (*Developer/POC Setup*) == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node and not on the compute node, you need to set up the Virtuozzo Storage client and authorize the node in the Virtuozzo Storage cluster.&lt;br /&gt;
&lt;br /&gt;
Set up the Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
Check that cluster discovery is working first:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output will show the discovered clusters.&lt;br /&gt;
Now you need to authenticate the controller node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.0.0/24 --floating_range 10.24.41.0/24 --floating_pool &amp;quot;start=10.24.41.151,end=10.24.41.199&amp;quot; --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL &lt;br /&gt;
&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Compute Node (*Developer/POC Setup*) ==&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
Clone Virtuozzo scripts to your COMPUTE node:&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd /vz/virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node and not on the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
Set up the Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
Check that cluster discovery is working first:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output will show the discovered clusters.&lt;br /&gt;
Now you need to authenticate the compute node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the COMPUTE node. Please read the script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1!  --use_provider_network true --mode COMPUTE --controller 10.24.41.25 &lt;br /&gt;
&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
== How to change Virtualization Type to Virtual Machines on the Compute Node ==&lt;br /&gt;
&lt;br /&gt;
If you want to use virtual machines instead of containers on your compute node, you need to change the virtualization settings on that node as follows.&lt;br /&gt;
&lt;br /&gt;
Open nova configuration file:&lt;br /&gt;
 $ vi /etc/nova/nova.conf&lt;br /&gt;
&lt;br /&gt;
Change the following lines:&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = qcow2&lt;br /&gt;
 connection_uri = parallels:///system&lt;br /&gt;
&lt;br /&gt;
Delete the line:&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
Save the file.&lt;br /&gt;
&lt;br /&gt;
Restart nova-compute service:&lt;br /&gt;
 $ su stack&lt;br /&gt;
 $ screen -r&lt;br /&gt;
Press Ctrl-c&lt;br /&gt;
 $ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' &amp;amp; echo $! &amp;gt;/vz/stack/status/stack/n-cpu.pid; fg || echo &amp;quot;n-cpu failed to start&amp;quot; | tee &amp;quot;/vz/stack/status/stack/n-cpu.failure&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To exit from the screen session:&lt;br /&gt;
Press Ctrl+a, then d&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run ./setup_devstack_vz7.sh with the options you need.&lt;br /&gt;
&lt;br /&gt;
== Installing OpenStack with the help of Packstack on [[Virtuozzo]] 7 (*Production Setup*) == &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Create a new repo file:&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /etc/yum.repos.d/virtuozzo-extra.repo &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 [virtuozzo-extra]&lt;br /&gt;
 name=Virtuozzo Extra&lt;br /&gt;
 baseurl=http://repo.virtuozzo.com/openstack/newton/x86_64/os/&lt;br /&gt;
 enabled=1&lt;br /&gt;
 gpgcheck=1&lt;br /&gt;
 priority=50&lt;br /&gt;
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Virtuozzo-7&lt;br /&gt;
 _EOF&lt;br /&gt;
&lt;br /&gt;
* Add RDO repository:&lt;br /&gt;
 &lt;br /&gt;
 $ yum install https://rdoproject.org/repos/rdo-release.rpm&lt;br /&gt;
&lt;br /&gt;
* Install packstack package:&lt;br /&gt;
&lt;br /&gt;
 $ yum install openstack-packstack&lt;br /&gt;
&lt;br /&gt;
* Download sample Vz7 packstack answer file:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://raw.githubusercontent.com/virtuozzo/virtuozzo-openstack-scripts/master/vz7-packstack-sample.txt&lt;br /&gt;
&lt;br /&gt;
* Edit vz7-packstack-sample.txt, enabling/disabling the necessary services. Also make sure the correct IP addresses are specified by the following parameters in the file:&lt;br /&gt;
&lt;br /&gt;
 CONFIG_CONTROLLER_HOST&lt;br /&gt;
 CONFIG_COMPUTE_HOSTS&lt;br /&gt;
 CONFIG_NETWORK_HOSTS&lt;br /&gt;
 CONFIG_AMQP_HOST&lt;br /&gt;
 CONFIG_MARIADB_HOST&lt;br /&gt;
 CONFIG_REDIS_HOST&lt;br /&gt;
&lt;br /&gt;
* Be sure to change the CONFIG_DEFAULT_PASSWORD parameter!&lt;br /&gt;
* Then run packstack:&lt;br /&gt;
&lt;br /&gt;
 $ packstack --answer-file vz7-packstack-sample.txt&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.virtuozzo.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --property hypervisor_type=vz --property cinder_img_volume_type=vstorage-ploop --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
 $ glance image-create --name centos7-hvm --disk-format qcow2 --container-format bare --property cinder_img_volume_type=vstorage-qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2&lt;br /&gt;
&lt;br /&gt;
* The CentOS cloud image can be downloaded from [http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 here]&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
 pointer_model = ps2mouse&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 vzstorage_mount_group = root&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels:///system&lt;br /&gt;
&lt;br /&gt;
* Remove the 'cpu_mode' parameter or set it as follows:&lt;br /&gt;
&lt;br /&gt;
 cpu_mode=none&lt;br /&gt;
  &lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
* If you plan to run virtual machines on your compute node, change the 'images_type' parameter to 'qcow2'&lt;br /&gt;
&lt;br /&gt;
== Install and configure a block storage node on [[Virtuozzo]] 7 (*Production Setup*) == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/mitaka/install-guide-rdo/cinder-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/cinder/cinder.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 enabled_backends = lvmdriver-1,vstorage-ploop,vstorage-qcow2&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
 [vstorage-ploop]&lt;br /&gt;
 vzstorage_default_volume_format = ploop&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
 [vstorage-qcow2]&lt;br /&gt;
 vzstorage_default_volume_format = qcow2&lt;br /&gt;
 vzstorage_shares_config = /etc/cinder/vzstorage-shares-vstorage.conf&lt;br /&gt;
 volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver&lt;br /&gt;
 volume_backend_name = vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
* Create /etc/cinder/vzstorage-shares-vstorage.conf with the following content:&lt;br /&gt;
&lt;br /&gt;
 YOUR-CLUSTER-NAME [&amp;quot;-u&amp;quot;, &amp;quot;cinder&amp;quot;, &amp;quot;-g&amp;quot;, &amp;quot;root&amp;quot;, &amp;quot;-m&amp;quot;, &amp;quot;0770&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
* Create two new volume types:&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-qcow2&lt;br /&gt;
 $ cinder type-key vstorage-qcow2 set volume_backend_name=vstorage-qcow2&lt;br /&gt;
&lt;br /&gt;
 $ cinder type-create vstorage-ploop&lt;br /&gt;
 $ cinder type-key vstorage-ploop set volume_backend_name=vstorage-ploop&lt;br /&gt;
&lt;br /&gt;
* Create a directory for storage logs:&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /var/log/pstorage&lt;br /&gt;
&lt;br /&gt;
* Authenticate your Virtuozzo Storage client nodes in order to allow them to mount the cluster:&lt;br /&gt;
&lt;br /&gt;
 $ echo $CLUSTER_PASSWD | vstorage auth-node -c cc -P&lt;br /&gt;
 &lt;br /&gt;
* Then restart cinder services:&lt;br /&gt;
&lt;br /&gt;
 $ systemctl restart openstack-cinder-api&lt;br /&gt;
 $ systemctl restart openstack-cinder-scheduler&lt;br /&gt;
 $ systemctl restart openstack-cinder-volume&lt;br /&gt;
&lt;br /&gt;
== How to create a new ploop image ready to upload to Glance == &amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Select an OS template. The following templates are available: vzlinux-7, centos-7, ubuntu-16.04, ubuntu-14.04, debian-8.0, centos-6, debian-8.0-x86_64-minimal&lt;br /&gt;
&lt;br /&gt;
 $ ct=centos-7&lt;br /&gt;
&lt;br /&gt;
* Create a new container based on the desired OS distribution&lt;br /&gt;
&lt;br /&gt;
 $ prlctl create glance-$ct --vmtype ct --ostemplate $ct&lt;br /&gt;
&lt;br /&gt;
* Set an IP address and DNS server so that the container can connect to the Internet&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --ipadd IPADDR --nameserver DNS_IPADDR&lt;br /&gt;
&lt;br /&gt;
* Add an additional network adapter&lt;br /&gt;
&lt;br /&gt;
 $ prlctl set glance-$ct --device-add net --network Bridged --dhcp on&lt;br /&gt;
&lt;br /&gt;
* Start the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl start glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Install the cloud-init package&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct yum install cloud-init -y&lt;br /&gt;
&lt;br /&gt;
* Remove the following modules from cloud.cfg&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i '/- growpart/d' /etc/cloud/cloud.cfg&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i '/- resizefs/d' /etc/cloud/cloud.cfg&lt;br /&gt;
&lt;br /&gt;
* Prepare network scripts (run the following inside the container)&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt; /etc/sysconfig/network-scripts/ifcfg-eth0 &amp;lt;&amp;lt; _EOF&lt;br /&gt;
 DEVICE=eth0&lt;br /&gt;
 ONBOOT=yes&lt;br /&gt;
 NM_CONTROLLED=no&lt;br /&gt;
 BOOTPROTO=dhcp&lt;br /&gt;
 _EOF&lt;br /&gt;
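&lt;br /&gt;
Note that the here-document above must be executed inside the container (for example after entering it with prlctl enter glance-$ct). The same file can also be written without a here-document, e.g. with printf; shown here against a scratch path for illustration:&lt;br /&gt;
&lt;br /&gt;
```shell
# printf equivalent of the here-document above; the scratch directory is
# illustrative, inside the container the target is
# /etc/sysconfig/network-scripts/ifcfg-eth0.
tmp=$(mktemp -d)
printf '%s\n' 'DEVICE=eth0' 'ONBOOT=yes' 'NM_CONTROLLED=no' 'BOOTPROTO=dhcp' > "$tmp/ifcfg-eth0"
cat "$tmp/ifcfg-eth0"
```
&lt;br /&gt;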
&lt;br /&gt;
* If you need more than one network adapter within the container, make as many copies as you need&lt;br /&gt;
&lt;br /&gt;
 $ prlctl exec glance-$ct cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
 $ prlctl exec glance-$ct sed -i 's/eth0/eth1/' /etc/sysconfig/network-scripts/ifcfg-eth1&lt;br /&gt;
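&lt;br /&gt;
The copy-and-rename step can be sketched on scratch files; note that the sed expression is a substitution, s/eth0/eth1/, and that inside the container the real files live under /etc/sysconfig/network-scripts:&lt;br /&gt;
&lt;br /&gt;
```shell
# Demonstrate the copy-and-rename of an ifcfg file on scratch paths
# (illustrative; the real files are under /etc/sysconfig/network-scripts).
tmp=$(mktemp -d)
printf 'DEVICE=eth0\nONBOOT=yes\nBOOTPROTO=dhcp\n' > "$tmp/ifcfg-eth0"
cp "$tmp/ifcfg-eth0" "$tmp/ifcfg-eth1"
sed -i 's/eth0/eth1/' "$tmp/ifcfg-eth1"   # substitution form: s/old/new/
cat "$tmp/ifcfg-eth1"
```
&lt;br /&gt;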
&lt;br /&gt;
* Perform some cleanup inside the container&lt;br /&gt;
&lt;br /&gt;
 $ rm -f /etc/sysconfig/network-scripts/ifcfg-venet0*&lt;br /&gt;
 $ rm -f /etc/resolv.conf&lt;br /&gt;
&lt;br /&gt;
* Stop the container&lt;br /&gt;
&lt;br /&gt;
 $ prlctl stop glance-$ct&lt;br /&gt;
&lt;br /&gt;
* Create a ploop disk and copy the files&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct&lt;br /&gt;
 $ ploop init -s 950M /tmp/ploop-$ct/$ct.hds&lt;br /&gt;
 $ mkdir /tmp/ploop-$ct/dst&lt;br /&gt;
 $ ploop mount -m /tmp/ploop-$ct/dst /tmp/ploop-$ct/DiskDescriptor.xml&lt;br /&gt;
 $ prlctl mount glance-$ct&lt;br /&gt;
 $ id=$(vzlist glance-$ct | awk ' NR&amp;gt;1 { print $1 }')&lt;br /&gt;
 $ cp -Pr --preserve=all /vz/root/$id/* /tmp/ploop-$ct/dst/&lt;br /&gt;
 $ prlctl umount glance-$ct&lt;br /&gt;
 $ ploop umount -m /tmp/ploop-$ct/dst/&lt;br /&gt;
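&lt;br /&gt;
In the sequence above, the vzlist/awk step extracts the numeric container ID by skipping the header line and printing the first column. A sketch of that filter on canned vzlist-style output (the sample table is an assumption, not real vzlist output):&lt;br /&gt;
&lt;br /&gt;
```shell
# The awk filter skips the header row (NR>1) and prints the first
# column, the container ID; canned sample output for illustration.
sample='CTID      NPROC STATUS   IP_ADDR        HOSTNAME
101         25 running  192.168.0.10   glance-centos-7'
id=$(printf '%s\n' "$sample" | awk 'NR>1 { print $1 }')
echo "$id"
```
&lt;br /&gt;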
&lt;br /&gt;
* Now the image /tmp/ploop-$ct/$ct.hds is ready to be uploaded to Glance&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-controller-install.html Controller Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/nova-compute-install.html Compute Node Installation Guide]&lt;br /&gt;
* [http://docs.openstack.org/newton/install-guide-rdo/environment-packages.html OpenStack Installation Guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo Documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22467</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22467"/>
		<updated>2016-12-01T11:55:06Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remains at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. It can be changed by any OpenVZ Wiki user without notice and without the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of underneath hardware level: full isolation guest environment, no dependencies from host OS, overhead for hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same instance of host OS: high density, high performance, high dependencies from host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running virtual machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (moved from OpenVZ since 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change amount of RAM for CT and VM without reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated GUI'''&lt;br /&gt;
|Centralized multi-server management&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backup using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability in order to convert physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not with zero downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, which will automatically live migrate all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically migrate VEs onto fewer hosts and power off unused capacity (hosts), waking systems back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|Qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}}, depends on underlying storage driver&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability  e.g. providing a virtual SAN through virtualized 'local' storage &lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network traffic of CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of these features are relevant only to Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel update without reboot'''&lt;br /&gt;
|Integrated ability to upgrade kernel or install security patches without downtime.&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{Yes}}, Rebootless Kernel Update&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for use in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an Open Source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Commercial support (Canonical Ltd.)&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22466</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22466"/>
		<updated>2016-12-01T11:51:19Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is the Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remains at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. This information can be changed by any OpenVZ Wiki user without notice or the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependencies on the host OS, some overhead for the hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same host OS instance: high density, high performance, high dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running virtual machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (switched from OpenVZ in 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change amount of RAM for CT and VM without reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated GUI'''&lt;br /&gt;
|Centralized multi-server management&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backup using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability in order to convert physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but with non-zero downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, which will automatically live migrate all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically migrate VEs onto fewer hosts and power off unused capacity (hosts), waking systems back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Cluster size'''&lt;br /&gt;
|Maximum number of hosts in the cluster/pool relationship and maximum number of VEs per cluster/pool (if specified)&lt;br /&gt;
|None&lt;br /&gt;
|32 hosts/cluster validated (100 hosts/cluster maximum) - PStorage limitation&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|None&lt;br /&gt;
|32 nodes&lt;br /&gt;
|64 nodes&lt;br /&gt;
|200 nodes&lt;br /&gt;
|16 nodes&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Site Failover'''&lt;br /&gt;
|Integrated ability to fail over virtual environments to a secondary site for disaster recovery&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Manual&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Integrated Disaster Recovery - manual&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|Qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}}, depends on underlying storage driver&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability  e.g. providing a virtual SAN through virtualized 'local' storage &lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Advanced Network Switch'''&lt;br /&gt;
|Centralized virtual network configuration (rather than managing virtual switches on individual hosts), typically with enhanced networking capabilities&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Open vSwitch support&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|OpenStack Neutron Integration&lt;br /&gt;
|Open vSwitch integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network traffic of CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of these features are relevant only to Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel update without reboot'''&lt;br /&gt;
|Integrated ability to upgrade kernel or install security patches without downtime.&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{Yes}}, Rebootless Kernel Update&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for use in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an Open Source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Commercial support (Canonical Ltd.)&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22465</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22465"/>
		<updated>2016-12-01T11:49:08Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is the Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remains at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. This information can be changed by any OpenVZ Wiki user without notice or the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependencies on the host OS, at the cost of hypervisor-layer overhead.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same host OS instance: high density and performance, but a strong dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology used to run virtual machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology used to run containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (moved from OpenVZ since 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change the amount of RAM for a CT or VM without reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|N/A&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated GUI'''&lt;br /&gt;
|Centralized multi-server management&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backup using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability to convert physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not zero-downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, automatically live-migrating all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically consolidate VEs onto fewer hosts, power off unused hosts, and wake systems back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Cluster size'''&lt;br /&gt;
|Maximum number of hosts in the cluster/pool relationship and maximum number of VEs per cluster/pool (if specified)&lt;br /&gt;
|None&lt;br /&gt;
|32 hosts/cluster validated (100 hosts/cluster maximum) - PStorage limitation&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|None&lt;br /&gt;
|32 nodes&lt;br /&gt;
|64 nodes&lt;br /&gt;
|200 nodes&lt;br /&gt;
|16 nodes&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Site Failover'''&lt;br /&gt;
|Integrated ability to (ideally live) migrate virtual machine data (virtual disk files) to different storage e.g. for array upgrades/migration and I/O management&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Manual&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Integrated Disaster Recovery - manual&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|Qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}}, depends on underlying storage driver&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Advanced Network Switch'''&lt;br /&gt;
|Centralized virtual network configuration (rather than managing virtual switches on individual hosts), typically with enhanced networking capabilities&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Open vSwitch support&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|OpenStack Neutron Integration&lt;br /&gt;
|Open vSwitch integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service (bandwidth, priority) for network traffic of CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of features are relevant only for Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel update without reboot'''&lt;br /&gt;
|Integrated ability to upgrade kernel or install security patches without downtime.&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{Yes}}, Rebootless Kernel Update&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool used for managing particular virtual machines and containers by their end users.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for using in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an open source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an open source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Both community and commercial support (Canonical Ltd.)&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22464</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22464"/>
		<updated>2016-12-01T11:40:32Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remains at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. This information can be changed by any OpenVZ Wiki user without notice or the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependencies on the host OS, at the cost of hypervisor-layer overhead.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same host OS instance: high density and performance, but a strong dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology used to run virtual machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology used to run containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (moved from OpenVZ since 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change the amount of RAM for a CT or VM without reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Central Management tool'''&lt;br /&gt;
|Is centralized multi-server management available for this edition?&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Update Management'''&lt;br /&gt;
|Integrated patching mechanism for the virtual environments (Guest OS) / guest tools / templates&lt;br /&gt;
|No integrated update, YUM (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, APT (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|{{Yes}} (WSUS, SCCM, Virtual Machine Servicing Tool 2012 for offline VM update)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backup using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability to convert physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not zero-downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, automatically live-migrating all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically consolidate VEs onto fewer hosts, power off unused hosts, and wake systems back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Cluster size'''&lt;br /&gt;
|Maximum number of hosts in the cluster/pool relationship and maximum number of VEs per cluster/pool (if specified)&lt;br /&gt;
|None&lt;br /&gt;
|32 hosts/cluster validated (100 hosts/cluster maximum) - PStorage limitation&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|None&lt;br /&gt;
|32 nodes&lt;br /&gt;
|64 nodes&lt;br /&gt;
|200 nodes&lt;br /&gt;
|16 nodes&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Site Failover'''&lt;br /&gt;
|Integrated ability to (ideally live) migrate virtual machine data (virtual disk files) to different storage e.g. for array upgrades/migration and I/O management&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Manual&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Integrated Disaster Recovery - manual&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}} (depends on underlying storage driver)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Advanced Network Switch'''&lt;br /&gt;
|Centralized virtual network configuration (rather than managing virtual switches on individual hosts), typically with enhanced networking capabilities&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Open vSwitch support&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|OpenStack Neutron Integration&lt;br /&gt;
|Open vSwitch integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network bandwidth and traffic for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most features are relevant only for Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} Application Image Catalog [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel update without reboot'''&lt;br /&gt;
|Integrated ability to upgrade the kernel or install security patches without downtime.&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{Yes}}, Rebootless Kernel Update&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for using in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an open source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an open source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22463</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22463"/>
		<updated>2016-12-01T11:39:52Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remain at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. This information can be changed by any OpenVZ Wiki user without notice and without the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependencies on the host OS, but with hypervisor-layer overhead.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same host OS instance: high density, high performance, but strong dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running virtual machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (switched from OpenVZ in 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change the amount of RAM for a CT or VM without reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Central Management tool'''&lt;br /&gt;
|Is centralized multi-server management available for this edition?&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Update Management'''&lt;br /&gt;
|Integrated patching mechanism for the virtual environments (Guest OS) / guest tools / templates&lt;br /&gt;
|No integrated update, YUM (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, APT (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|Yes (WSUS, SCCM, Virtual Machine Servicing Tool 2012 for offline VM update)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backup using agents in the guests)?&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability to convert physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not zero-downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, which automatically live-migrates all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically consolidate VEs onto fewer hosts, power off unused hosts, and wake them back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Cluster size'''&lt;br /&gt;
|Maximum number of hosts in the cluster/pool and maximum number of VEs per cluster/pool (if specified)&lt;br /&gt;
|None&lt;br /&gt;
|32 hosts/cluster validated (100 hosts/cluster maximum) - PStorage limitation&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|None&lt;br /&gt;
|32 nodes&lt;br /&gt;
|64 nodes&lt;br /&gt;
|200 nodes&lt;br /&gt;
|16 nodes&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Site Failover'''&lt;br /&gt;
|Integrated ability to fail over virtual environments to a secondary site in case of a site-wide outage (disaster recovery)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Manual&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Integrated Disaster Recovery - manual&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}} (depends on underlying storage driver)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Advanced Network Switch'''&lt;br /&gt;
|Centralized virtual network configuration (rather than managing virtual switches on individual hosts), typically with enhanced networking capabilities&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Open vSwitch support&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|OpenStack Neutron Integration&lt;br /&gt;
|Open vSwitch integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network bandwidth and traffic for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most features are relevant only for Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} Application Image Catalog [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel update without reboot'''&lt;br /&gt;
|Integrated ability to upgrade the kernel or install security patches without downtime.&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{Yes}}, Rebootless Kernel Update&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|{{No}}, only 3rd party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for using in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an open source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an open source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22462</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22462"/>
		<updated>2016-12-01T11:37:15Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remain at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. This information can be changed by any OpenVZ Wiki user without notice and without the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependency on the host OS, overhead for the hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same instance of the host OS: high density, high performance, high dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running Virtual Machines.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (moved from OpenVZ since 4.0)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change the amount of RAM for a CT or VM without reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Central Management tool'''&lt;br /&gt;
|Is centralized multi-server management available for this edition?&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Update Management'''&lt;br /&gt;
|Integrated patching mechanism for the virtual environments (Guest OS) / guest tools / templates&lt;br /&gt;
|No integrated update, YUM (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, APT (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|Yes (WSUS, SCCM, Virtual Machine Servicing Tool 2012 for offline VM update)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel Updates without reboot'''&lt;br /&gt;
|Ability to update Linux kernel or install security patches without reboot&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backup using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability in order to convert physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not with zero downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, which automatically live migrates all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically migrate VEs onto fewer hosts and power off unused capacity (hosts), waking systems back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Cluster size'''&lt;br /&gt;
|Maximum number of hosts in the cluster/pool relationship and maximum number of VEs per cluster/pool (if specified)&lt;br /&gt;
|None&lt;br /&gt;
|32 hosts/cluster validated (100 hosts/cluster maximum) - PStorage limitation&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|None&lt;br /&gt;
|32 nodes&lt;br /&gt;
|64 nodes&lt;br /&gt;
|200 nodes&lt;br /&gt;
|16 nodes&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Site Failover'''&lt;br /&gt;
|Integrated ability to (ideally live) migrate virtual machine data (virtual disk files) to different storage e.g. for array upgrades/migration and I/O management&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Manual&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Integrated Disaster Recovery - manual&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-though (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}} (depends on underlying storage driver)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability  e.g. providing a virtual SAN through virtualized 'local' storage &lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Advanced Network Switch'''&lt;br /&gt;
|Centralized virtual network configuration (rather than managing virtual switches on individual hosts), typically with enhanced networking capabilities&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Open vSwitch support&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|OpenStack Neutron Integration&lt;br /&gt;
|Open vSwitch integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network bandwidth and traffic for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of features are relevant only for Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|N/A&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} Application Image Catalog [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel maintenance'''&lt;br /&gt;
|Integrated ability to upgrade kernel with minimal downtime.&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|{{Yes}}, kernel rebootless update (vzreboot)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|{{No}}, only 3rd-party tools&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool used for managing particular virtual machines and containers by their end users.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for using in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an Open Source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22461</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22461"/>
		<updated>2016-12-01T11:32:47Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remain at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. This information can be changed by any OpenVZ Wiki user without notice and without the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependency on the host OS, overhead for the hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same instance of the host OS: high density, high performance, high dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running Virtual Machines.&lt;br /&gt;
|None&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (moved from OpenVZ since 4.0)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for VM'''&lt;br /&gt;
|Ability to change the amount of RAM for a CT or VM without reboot&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|N/A&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Central Management tool'''&lt;br /&gt;
|Is centralized multi-server management available for this edition?&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Update Management'''&lt;br /&gt;
|Integrated patching mechanism for the virtual environments (Guest OS) / guest tools / templates&lt;br /&gt;
|No integrated update, YUM (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, APT (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|Yes (WSUS, SCCM, Virtual Machine Servicing Tool 2012 for offline VM update)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel Updates without reboot'''&lt;br /&gt;
|Ability to update Linux kernel or install security patches without reboot&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backup using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability in order to convert physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not with zero downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, which automatically live migrates all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically migrate VEs onto fewer hosts and power off unused capacity (hosts), then wake systems back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Cluster size'''&lt;br /&gt;
|Maximum number of hosts in the cluster/pool relationship and maximum number VEs per cluster/pool (if specified)&lt;br /&gt;
|None&lt;br /&gt;
|32 hosts/cluster validated (100 hosts/cluster maximum) - PStorage limitation&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|None&lt;br /&gt;
|32 nodes&lt;br /&gt;
|64 nodes&lt;br /&gt;
|200 nodes&lt;br /&gt;
|16 nodes&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Site Failover'''&lt;br /&gt;
|Integrated ability to fail over virtual environments to a secondary site for disaster recovery&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Manual&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Integrated Disaster Recovery - manual&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}} (depends on underlying storage driver)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Advanced Network Switch'''&lt;br /&gt;
|Centralized virtual network configuration (rather than managing virtual switches on individual hosts), typically with enhanced networking capabilities&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Open vSwitch support&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|OpenStack Neutron Integration&lt;br /&gt;
|Open vSwitch integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network bandwidth or traffic priority for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of these features are relevant only to Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} Application Image Catalog [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel maintenance'''&lt;br /&gt;
|Ability to upgrade kernel with minimal downtime.&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|kernel rebootless update (vzreboot)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|None&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for use in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an open source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an open source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22460</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22460"/>
		<updated>2016-12-01T11:30:42Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remain at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. This information can be changed by any OpenVZ Wiki user without notice and without the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environments, no dependency on the host OS, some overhead from the hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same host OS instance: high density, high performance, strong dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running Virtual Machines.&lt;br /&gt;
|None&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (switched from OpenVZ in 4.0)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, with new VCMMD memory management&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Online Memory Management for CT/VM'''&lt;br /&gt;
|Ability to change amount of RAM for CT and VM without reboot&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Central Management tool'''&lt;br /&gt;
|Is centralized multi-server management available for this edition?&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Update Management'''&lt;br /&gt;
|Integrated patching mechanism for the virtual environments (Guest OS) / guest tools / templates&lt;br /&gt;
|No integrated update, YUM (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, APT (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|Yes (WSUS, SCCM, Virtual Machine Servicing Tool 2012 for offline VM update)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel Updates without reboot'''&lt;br /&gt;
|Ability to update Linux kernel or install security patches without reboot&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backup using agents in the guests)?&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} &lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability in order to convert physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not with zero downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|{{Yes}}, CRIU for containers&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, which automatically live migrates all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically migrate VEs onto fewer hosts and power off unused capacity (hosts), then wake systems back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Cluster size'''&lt;br /&gt;
|Maximum number of hosts in the cluster/pool relationship and maximum number VEs per cluster/pool (if specified)&lt;br /&gt;
|None&lt;br /&gt;
|32 hosts/cluster validated (100 hosts/cluster maximum) - PStorage limitation&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|None&lt;br /&gt;
|32 nodes&lt;br /&gt;
|64 nodes&lt;br /&gt;
|200 nodes&lt;br /&gt;
|16 nodes&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Site Failover'''&lt;br /&gt;
|Integrated ability to fail over virtual environments to a secondary site for disaster recovery&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Manual&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Integrated Disaster Recovery - manual&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}} (depends on underlying storage driver)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Advanced Network Switch'''&lt;br /&gt;
|Centralized virtual network configuration (rather than managing virtual switches on individual hosts), typically with enhanced networking capabilities&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Open vSwitch support&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|OpenStack Neutron Integration&lt;br /&gt;
|Open vSwitch integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network bandwidth or traffic priority for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of these features are relevant only to Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} Application Image Catalog [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel maintenance'''&lt;br /&gt;
|Ability to upgrade kernel with minimal downtime.&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|kernel rebootless update (vzreboot)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|None&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for use in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an open source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an open source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22459</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=22459"/>
		<updated>2016-12-01T11:23:07Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding Virtuozzo 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remain at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. It can be changed by any OpenVZ Wiki user without notice or the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! [https://virtuozzo.com/products/virtuozzo-containers/ Virtuozzo&amp;amp;nbsp;7]&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependencies on the host OS, some overhead for the hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same host OS instance: high density, high performance, strong dependence on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running Virtual Machines.&lt;br /&gt;
|None&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (moved from OpenVZ since 4.0)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Central Management tool'''&lt;br /&gt;
|Is centralized multi-server management available for this edition?&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, [https://virtuozzo.com/support/pva/ Virtual Automator]&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Update Management'''&lt;br /&gt;
|Integrated patching mechanism for the virtual environments (Guest OS) / guest tools / templates&lt;br /&gt;
|No integrated update, YUM (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, APT (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|Yes (WSUS, SCCM, Virtual Machine Servicing Tool 2012 for offline VM update)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backups using agents in the guests)?&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability to convert physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not with zero downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU&lt;br /&gt;
|{{Yes}}, CRIU&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, which automatically live migrates all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically migrate VEs onto fewer hosts and power off unused capacity (hosts), waking systems back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Cluster size'''&lt;br /&gt;
|Maximum number of hosts in the cluster/pool and maximum number of VEs per cluster/pool (if specified)&lt;br /&gt;
|None&lt;br /&gt;
|32 hosts/cluster validated (100 hosts/cluster maximum) - PStorage limitation&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|None&lt;br /&gt;
|32 nodes&lt;br /&gt;
|64 nodes&lt;br /&gt;
|200 nodes&lt;br /&gt;
|16 nodes&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Site Failover'''&lt;br /&gt;
|Integrated ability to (ideally live) migrate virtual machine data (virtual disk files) to different storage e.g. for array upgrades/migration and I/O management&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Manual&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Integrated Disaster Recovery - manual&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}} (depends on underlying storage driver)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Advanced Network Switch'''&lt;br /&gt;
|Centralized virtual network configuration (rather than managing virtual switches on individual hosts), typically with enhanced networking capabilities&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Open vSwitch support&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|OpenStack Neutron Integration&lt;br /&gt;
|Open vSwitch integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network I/O or throughput for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of features are relevant only for Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel maintenance'''&lt;br /&gt;
|Ability to upgrade kernel with minimal downtime.&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|kernel rebootless update (vzreboot)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|None&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for using in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an Open Source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Commercial support (Canonical Ltd.)&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=19800</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=19800"/>
		<updated>2016-07-26T10:35:55Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding [[Virtuozzo]] 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remain at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. It can be changed by any OpenVZ Wiki user without notice or the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ (stable)&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;7&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;7 Plus&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependencies on the host OS, some overhead for the hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same host OS instance: high density, high performance, strong dependence on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running Virtual Machines.&lt;br /&gt;
|None&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (moved from OpenVZ since 4.0)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Central Management tool'''&lt;br /&gt;
|Is centralized multi-server management available for this edition?&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, PVA&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Update Management'''&lt;br /&gt;
|Integrated patching mechanism for the virtual environments (Guest OS) / guest tools / templates&lt;br /&gt;
|No integrated update, YUM (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, APT (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|Yes (WSUS, SCCM, Virtual Machine Servicing Tool 2012 for offline VM update)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backups using agents in the guests)?&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability to convert physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not with zero downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU&lt;br /&gt;
|{{Yes}}, CRIU&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, which automatically live migrates all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically migrate VEs onto fewer hosts, power off unused capacity (hosts), and wake systems back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Cluster size'''&lt;br /&gt;
|Maximum number of hosts in the cluster/pool relationship and maximum number VEs per cluster/pool (if specified)&lt;br /&gt;
|None&lt;br /&gt;
|32 hosts/cluster validated (100 hosts/cluster maximum) - PStorage limitation&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|None&lt;br /&gt;
|32 nodes&lt;br /&gt;
|64 nodes&lt;br /&gt;
|200 nodes&lt;br /&gt;
|16 nodes&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Site Failover'''&lt;br /&gt;
|Integrated ability to fail over virtual environments to a secondary site in case of a site-wide failure (disaster recovery)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Manual&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Integrated Disaster Recovery - manual&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]/QCOW2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]/QCOW2&lt;br /&gt;
|Any&lt;br /&gt;
|qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}} (depends on underlying storage driver)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|TBD&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Advanced Network Switch'''&lt;br /&gt;
|Centralized virtual network configuration (rather than managing virtual switches on individual hosts), typically with enhanced networking capabilities&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Open vSwitch support&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|OpenStack Neutron Integration&lt;br /&gt;
|Open vSwitch integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network bandwidth or traffic for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of features are relevant only for Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''API/SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Bitnami)&amp;lt;ref&amp;gt;[http://www.odin.com/fileadmin/media/hcap/virtuozzo/documents/Virtuozzo-app-catalog-Whitepaper_Ltr_20151015.pdf Image Management Using the Virtuozzo Application Catalog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel maintenance'''&lt;br /&gt;
|Ability to upgrade kernel with minimal downtime.&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|kernel rebootless update (vzreboot)&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|kernel rebootless update (vzreboot)&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|None&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their particular virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for use in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an Open Source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License/Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[http://www.odin.com/support/policies/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|5 years of support&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19788</id>
		<title>Docker inside CT vz7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19788"/>
		<updated>2016-07-25T08:49:32Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since Virtuozzo 7 kernel 3.10.0-327.18.2.vz7.14.7 it is possible to run Docker inside containers.&lt;br /&gt;
&lt;br /&gt;
'''Please be aware that this feature is experimental and is not supported in production! We plan to make it production-ready in upcoming updates.'''&lt;br /&gt;
&lt;br /&gt;
'''This page is applicable for Virtuozzo 7''' (for Virtuozzo 6 see [[Docker inside CT | '''here''']]).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Kernel 3.10.0-327.18.2.vz7.14.7 or later version&lt;br /&gt;
* Kernel modules '''veth''' and '''overlay''' loaded on host&lt;br /&gt;
&lt;br /&gt;
To load the '''veth''' and '''overlay''' modules, run:&lt;br /&gt;
 modprobe veth&lt;br /&gt;
 modprobe overlay &lt;br /&gt;
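&lt;br /&gt;
To have both modules loaded automatically at boot, you can also create a modules-load.d file (a sketch, assuming a systemd-based host; the file name is arbitrary):&lt;br /&gt;
 printf 'veth\noverlay\n' &amp;gt; /etc/modules-load.d/docker-ct.conf&lt;br /&gt;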
&lt;br /&gt;
'''Note:''' if you use kernel &amp;gt;= 3.10.0-327.18.2.vz7.14.25, you need to allow using &amp;quot;overlayfs&amp;quot; inside a Virtuozzo Container:&lt;br /&gt;
 echo 1 &amp;gt; /proc/sys/fs/experimental_fs_enable&lt;br /&gt;
This is a temporary step; it will be dropped once overlayfs is proven to be absolutely safe to run in any vz7 Container.&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* Only the '''overlay''' and '''vfs''' Docker graph drivers are currently supported; the recommended driver is '''overlay'''. To enable the '''overlayfs''' storage driver for the Docker engine inside a CT, see https://docs.docker.com/engine/userguide/storagedriver/selectadriver/&lt;br /&gt;
* [[Checkpointing and live migration]] of a container with Docker containers inside is not supported yet (to be done)&lt;br /&gt;
&lt;br /&gt;
== Container tuning ==&lt;br /&gt;
&lt;br /&gt;
* Turn on the '''bridge''' feature to allow Docker to create bridged networks inside the container:&lt;br /&gt;
 prlctl set $veid --features bridge:on&lt;br /&gt;
* Set up a veth-based network for the Container (the Container must be '''veth'''-based, not '''venet'''-based):&lt;br /&gt;
 prlctl set $veid --device-add net --network Bridged --dhcp yes&lt;br /&gt;
* Allow all iptables modules to be used in containers:&lt;br /&gt;
 prlctl set $veid --netfilter=full&lt;br /&gt;
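&lt;br /&gt;
For example, for a container with ID 101 the whole tuning sequence looks like this (the container ID is just an example; restart the container afterwards so the new settings take effect):&lt;br /&gt;
 prlctl set 101 --features bridge:on&lt;br /&gt;
 prlctl set 101 --device-add net --network Bridged --dhcp yes&lt;br /&gt;
 prlctl set 101 --netfilter=full&lt;br /&gt;
 prlctl restart 101&lt;br /&gt;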
&lt;br /&gt;
== Docker install ==&lt;br /&gt;
&lt;br /&gt;
To install Docker inside the container, follow the Docker Installation Guide for your OS:&lt;br /&gt;
https://docs.docker.com/v1.11/engine/installation/&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category: TRD]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19787</id>
		<title>Docker inside CT vz7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19787"/>
		<updated>2016-07-25T08:22:35Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since Virtuozzo 7 kernel 3.10.0-327.18.2.vz7.14.7 it is possible to run Docker inside containers.&lt;br /&gt;
&lt;br /&gt;
'''Please be aware that this feature is experimental and is not supported in production!'''&lt;br /&gt;
&lt;br /&gt;
'''This page is applicable for Virtuozzo 7''' (for Virtuozzo 6 see [[Docker inside CT | '''here''']]).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Kernel 3.10.0-327.18.2.vz7.14.7 or later version&lt;br /&gt;
* Kernel modules '''veth''' and '''overlay''' loaded on host&lt;br /&gt;
&lt;br /&gt;
To load the '''veth''' and '''overlay''' modules, run:&lt;br /&gt;
 modprobe veth&lt;br /&gt;
 modprobe overlay &lt;br /&gt;
&lt;br /&gt;
'''Note:''' if you use kernel &amp;gt;= 3.10.0-327.18.2.vz7.14.25, you need to allow using &amp;quot;overlayfs&amp;quot; inside a Virtuozzo Container:&lt;br /&gt;
 echo 1 &amp;gt; /proc/sys/fs/experimental_fs_enable&lt;br /&gt;
This is a temporary step; it will be dropped once overlayfs is proven to be absolutely safe to run in any vz7 Container.&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* Only the '''overlay''' and '''vfs''' Docker graph drivers are currently supported; the recommended driver is '''overlay'''. To enable the '''overlayfs''' storage driver for the Docker engine inside a CT, see https://docs.docker.com/engine/userguide/storagedriver/selectadriver/&lt;br /&gt;
* [[Checkpointing and live migration]] of a container with Docker containers inside is not supported yet (to be done)&lt;br /&gt;
&lt;br /&gt;
== Container tuning ==&lt;br /&gt;
&lt;br /&gt;
* Turn on the '''bridge''' feature to allow Docker to create bridged networks inside the container:&lt;br /&gt;
 prlctl set $veid --features bridge:on&lt;br /&gt;
* Set up a veth-based network for the Container (the Container must be '''veth'''-based, not '''venet'''-based):&lt;br /&gt;
 prlctl set $veid --device-add net --network Bridged --dhcp yes&lt;br /&gt;
* Allow all iptables modules to be used in containers:&lt;br /&gt;
 prlctl set $veid --netfilter=full&lt;br /&gt;
&lt;br /&gt;
== Docker install ==&lt;br /&gt;
&lt;br /&gt;
To install Docker inside the container, follow the Docker Installation Guide for your OS:&lt;br /&gt;
https://docs.docker.com/v1.11/engine/installation/&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category: TRD]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19786</id>
		<title>Docker inside CT vz7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19786"/>
		<updated>2016-07-25T08:22:01Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since Virtuozzo 7 kernel 3.10.0-327.18.2.vz7.14.7 it is possible to run Docker inside containers.&lt;br /&gt;
&lt;br /&gt;
'''Please be aware that this feature is experimental and is not supported in production!'''&lt;br /&gt;
&lt;br /&gt;
'''This page is applicable for Virtuozzo 7''' (for OpenVZ 6 see [[Docker inside CT | '''here''']]).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Kernel 3.10.0-327.18.2.vz7.14.7 or later version&lt;br /&gt;
* Kernel modules '''veth''' and '''overlay''' loaded on host&lt;br /&gt;
&lt;br /&gt;
To load the '''veth''' and '''overlay''' modules, run:&lt;br /&gt;
 modprobe veth&lt;br /&gt;
 modprobe overlay &lt;br /&gt;
&lt;br /&gt;
'''Note:''' if you use kernel &amp;gt;= 3.10.0-327.18.2.vz7.14.25, you need to allow using &amp;quot;overlayfs&amp;quot; inside a Virtuozzo Container:&lt;br /&gt;
 echo 1 &amp;gt; /proc/sys/fs/experimental_fs_enable&lt;br /&gt;
This is a temporary step; it will be dropped once overlayfs is proven to be absolutely safe to run in any vz7 Container.&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* Only the '''overlay''' and '''vfs''' Docker graph drivers are currently supported; the recommended driver is '''overlay'''. To enable the '''overlayfs''' storage driver for the Docker engine inside a CT, see https://docs.docker.com/engine/userguide/storagedriver/selectadriver/&lt;br /&gt;
* [[Checkpointing and live migration]] of a container with Docker containers inside is not supported yet (to be done)&lt;br /&gt;
&lt;br /&gt;
== Container tuning ==&lt;br /&gt;
&lt;br /&gt;
* Turn on the '''bridge''' feature to allow Docker to create bridged networks inside the container:&lt;br /&gt;
 prlctl set $veid --features bridge:on&lt;br /&gt;
* Set up a veth-based network for the Container (the Container must be '''veth'''-based, not '''venet'''-based):&lt;br /&gt;
 prlctl set $veid --device-add net --network Bridged --dhcp yes&lt;br /&gt;
* Allow all iptables modules to be used in containers:&lt;br /&gt;
 prlctl set $veid --netfilter=full&lt;br /&gt;
&lt;br /&gt;
== Docker install ==&lt;br /&gt;
&lt;br /&gt;
To install Docker inside the container, follow the Docker Installation Guide for your OS:&lt;br /&gt;
https://docs.docker.com/v1.11/engine/installation/&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category: TRD]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19785</id>
		<title>Docker inside CT vz7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19785"/>
		<updated>2016-07-22T12:00:42Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since Virtuozzo 7 kernel 3.10.0-327.18.2.vz7.14.7 it is possible to run Docker inside containers. This article describes how.&lt;br /&gt;
&amp;lt;br&amp;gt;'''This page is applicable for Virtuozzo 7''' (for OpenVZ 6 see [[Docker inside CT | '''here''']]).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Kernel 3.10.0-327.18.2.vz7.14.7 or later version&lt;br /&gt;
* Kernel modules '''veth''' and '''overlay''' loaded on host&lt;br /&gt;
&lt;br /&gt;
To load the '''veth''' and '''overlay''' modules, run:&lt;br /&gt;
 modprobe veth&lt;br /&gt;
 modprobe overlay &lt;br /&gt;
&lt;br /&gt;
'''Note:''' if you use kernel &amp;gt;= 3.10.0-327.18.2.vz7.14.25, you need to allow using &amp;quot;overlayfs&amp;quot; inside a Virtuozzo Container:&lt;br /&gt;
 echo 1 &amp;gt; /proc/sys/fs/experimental_fs_enable&lt;br /&gt;
This is a temporary step; it will be dropped once overlayfs is proven to be absolutely safe to run in any vz7 Container.&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* Only the '''overlay''' and '''vfs''' Docker graph drivers are currently supported; the recommended driver is '''overlay'''. To enable the '''overlayfs''' storage driver for the Docker engine inside a CT, see https://docs.docker.com/engine/userguide/storagedriver/selectadriver/&lt;br /&gt;
* [[Checkpointing and live migration]] of a container with Docker containers inside is not supported yet (to be done)&lt;br /&gt;
&lt;br /&gt;
== Container tuning ==&lt;br /&gt;
&lt;br /&gt;
* Turn on the '''bridge''' feature to allow Docker to create bridged networks inside the container:&lt;br /&gt;
 prlctl set $veid --features bridge:on&lt;br /&gt;
* Set up a veth-based network for the Container (the Container must be '''veth'''-based, not '''venet'''-based):&lt;br /&gt;
 prlctl set $veid --device-add net --network Bridged --dhcp yes&lt;br /&gt;
* Allow all iptables modules to be used in containers:&lt;br /&gt;
 prlctl set $veid --netfilter=full&lt;br /&gt;
&lt;br /&gt;
== Docker install ==&lt;br /&gt;
&lt;br /&gt;
To install Docker inside the container, follow the Docker Installation Guide for your OS:&lt;br /&gt;
https://docs.docker.com/v1.11/engine/installation/&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category: TRD]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19775</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19775"/>
		<updated>2016-07-15T11:50:03Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6. With Virtuozzo 7 we are adding many new capabilities to the OpenStack integration. &lt;br /&gt;
Current limitations (bugs, not implemented or by design):&lt;br /&gt;
#HA does not work.&lt;br /&gt;
#Virtuozzo Storage is not supported for containers and VMs in cinder. &lt;br /&gt;
&lt;br /&gt;
This guide shows how to install OpenStack on Virtuozzo nodes with the help of DevStack. DevStack installs a stateless OpenStack for demo purposes, which means the setup is reset after a host reboot; for that reason, virtual machines are the best platform for this setup.&lt;br /&gt;
&lt;br /&gt;
Please note that OpenStack currently does not support containers and virtual machines on the same node, so you need at least two nodes to try both container and VM management.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to set up OpenStack with Virtuozzo 7:&lt;br /&gt;
#controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host acts as the controller and as a Virtuozzo container host.&lt;br /&gt;
#compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host acts as a virtual machine host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on the controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Then update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/vz.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
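&lt;br /&gt;
Step 2 can also be done non-interactively, for example (a sketch; back up the file first):&lt;br /&gt;
 sed -i 's/ip_conntrack_disable_ve0=1/ip_conntrack_disable_ve0=0/' /etc/modprobe.d/vz.conf&lt;br /&gt;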
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Controller Node with Virtuozzo Containers Support == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case a compute node with Virtuozzo Containers support is deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.&lt;br /&gt;
&lt;br /&gt;
Clone the Virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node rather than on this one, you need to set up the Virtuozzo Storage client and authorize the node in the Virtuozzo Storage cluster. &lt;br /&gt;
&lt;br /&gt;
Set up the Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery works:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
Output will show the discovered clusters.&lt;br /&gt;
Now you need to authenticate the controller node in the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node -P&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter. &lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
Output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.1.0/24 --floating_range 10.24.41.0/24 --floating_pool &amp;quot;start=10.24.41.151,end=10.24.41.199&amp;quot; --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL &lt;br /&gt;
&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Compute Node ==&lt;br /&gt;
&lt;br /&gt;
Clone the Virtuozzo scripts to your COMPUTE node:&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd /vz/virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node rather than on the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster. &lt;br /&gt;
Set up the Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery works:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
Output will show the discovered clusters.&lt;br /&gt;
Now you need to authenticate the compute node in the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node -P&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter. &lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
Output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the COMPUTE node. Please read the script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1!  --use_provider_network true --mode COMPUTE --controller 10.24.41.25 &lt;br /&gt;
&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
== How to change Virtualization Type to Virtual Machines on the Compute Node ==&lt;br /&gt;
&lt;br /&gt;
If you want to use virtual machines instead of containers on your compute node you need to change virtualization type to KVM on the selected compute node.&lt;br /&gt;
&lt;br /&gt;
Open the nova configuration file:&lt;br /&gt;
 $ vi /etc/nova/nova.conf&lt;br /&gt;
&lt;br /&gt;
Change the following lines:&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = kvm&lt;br /&gt;
 images_type = qcow2&lt;br /&gt;
 connection_uri = qemu:///system&lt;br /&gt;
&lt;br /&gt;
Delete the line:&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
Save the file.&lt;br /&gt;
&lt;br /&gt;
Restart nova-compute service:&lt;br /&gt;
 $ su stack&lt;br /&gt;
 $ screen -r&lt;br /&gt;
Press Ctrl-c&lt;br /&gt;
 $ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' &amp;amp; echo $! &amp;gt;/vz/stack/status/stack/n-cpu.pid; fg || echo &amp;quot;n-cpu failed to start&amp;quot; | tee &amp;quot;/vz/stack/status/stack/n-cpu.failure&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To exit from the screen session, press Ctrl+a then d.&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run ./setup_devstack_vz7.sh with the options you need.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
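&lt;br /&gt;
You can then verify that the image is registered (the name centos7-exe should appear in the output):&lt;br /&gt;
 glance image-list | grep centos7-exe&lt;br /&gt;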
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19774</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19774"/>
		<updated>2016-07-15T11:46:11Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6, and Virtuozzo 7 adds many new capabilities to the OpenStack integration.&lt;br /&gt;
Current limitations (bugs, not implemented or by design):&lt;br /&gt;
#HA does not work.&lt;br /&gt;
#Virtuozzo Storage is not supported for containers and VMs in cinder. &lt;br /&gt;
&lt;br /&gt;
This guide shows how to install OpenStack on Virtuozzo nodes with the help of the DevStack tools. DevStack installs a stateless OpenStack intended for demo purposes, which means the setup is reset after a host reboot. Virtual machines are therefore the best platform for this kind of OpenStack setup.&lt;br /&gt;
&lt;br /&gt;
Please note that OpenStack currently does not support containers and virtual machines on the same node, so you need at least two nodes to try managing both containers and VMs.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to set up OpenStack with Virtuozzo 7:&lt;br /&gt;
#Controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and as a Virtuozzo container host.&lt;br /&gt;
#Compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as a virtual machine host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/vz.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
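&lt;br /&gt;
Steps 1 and 2 can also be done non-interactively; this is a sketch only, assuming the file and option names shown above:&lt;br /&gt;
 $ sed -i 's/ip_conntrack_disable_ve0=1/ip_conntrack_disable_ve0=0/' /etc/modprobe.d/vz.conf&lt;br /&gt;
 $ reboot&lt;br /&gt;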
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Controller Node with Virtuozzo Containers Support == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.&lt;br /&gt;
&lt;br /&gt;
Clone the Virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node, you need to set up the Virtuozzo Storage client and authorize the controller node in the Virtuozzo Storage cluster.&lt;br /&gt;
&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery works:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output lists the discovered clusters.&lt;br /&gt;
Now authenticate the controller node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node -P&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output shows the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the CONTROLLER node. Please read full script description here https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.1.0/24 --floating_range 10.24.41.0/24 --floating_pool &amp;quot;start=10.24.41.151,end=10.24.41.199&amp;quot; --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL &lt;br /&gt;
&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Compute Node ==&lt;br /&gt;
&lt;br /&gt;
Clone Virtuozzo scripts to your COMPUTE node:&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd /vz/virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on a node other than the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery works:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output lists the discovered clusters.&lt;br /&gt;
Now authenticate the compute node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node -P&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output shows the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the COMPUTE node. Please read the script description here: https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1!  --use_provider_network true --mode COMPUTE --controller 10.24.41.25 &lt;br /&gt;
&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
== How to change Virtualization Type to Virtual Machines on the Compute Node ==&lt;br /&gt;
&lt;br /&gt;
If you want to use virtual machines instead of containers on your compute node, you need to change the virtualization type to KVM on the selected compute node.&lt;br /&gt;
&lt;br /&gt;
Open nova configuration file:&lt;br /&gt;
 $ vi /etc/nova/nova.conf&lt;br /&gt;
&lt;br /&gt;
Change the following lines:&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = kvm&lt;br /&gt;
 images_type = qcow2&lt;br /&gt;
 connection_uri = qemu:///system&lt;br /&gt;
&lt;br /&gt;
Delete the line:&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
Save the file.&lt;br /&gt;
&lt;br /&gt;
Restart nova-compute service:&lt;br /&gt;
 $ su stack&lt;br /&gt;
 $ screen -r&lt;br /&gt;
Press Ctrl+c to stop the running process, then start nova-compute again:&lt;br /&gt;
 $ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' &amp;amp; echo $! &amp;gt;/vz/stack/status/stack/n-cpu.pid; fg || echo &amp;quot;n-cpu failed to start&amp;quot; | tee &amp;quot;/vz/stack/status/stack/n-cpu.failure&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To detach from the screen session:&lt;br /&gt;
Press Ctrl+a, then d.&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;./setup_devstack_vz7.sh&amp;lt;/code&amp;gt; with the options you need.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19773</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19773"/>
		<updated>2016-07-15T11:45:36Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6, and Virtuozzo 7 adds many new capabilities to the OpenStack integration.&lt;br /&gt;
Current limitations (bugs, not implemented or by design):&lt;br /&gt;
#HA does not work.&lt;br /&gt;
#Virtuozzo Storage is not supported for containers and VMs in cinder. &lt;br /&gt;
&lt;br /&gt;
This guide shows how to install OpenStack on Virtuozzo nodes with the help of the DevStack tools. DevStack installs a stateless OpenStack intended for demo purposes, which means the setup is reset after a host reboot. Virtual machines are therefore the best platform for this kind of OpenStack setup.&lt;br /&gt;
&lt;br /&gt;
Please note that OpenStack currently does not support containers and virtual machines on the same node, so you need at least two nodes to try managing both containers and VMs.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to set up OpenStack with Virtuozzo 7:&lt;br /&gt;
#Controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and as a Virtuozzo container host.&lt;br /&gt;
#Compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as a virtual machine host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/vz.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
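&lt;br /&gt;
Steps 1 and 2 can also be done non-interactively; this is a sketch only, assuming the file and option names shown above:&lt;br /&gt;
 $ sed -i 's/ip_conntrack_disable_ve0=1/ip_conntrack_disable_ve0=0/' /etc/modprobe.d/vz.conf&lt;br /&gt;
 $ reboot&lt;br /&gt;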
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Controller Node with Virtuozzo Containers Support == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.&lt;br /&gt;
&lt;br /&gt;
Clone the Virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node, you need to set up the Virtuozzo Storage client and authorize the controller node in the Virtuozzo Storage cluster.&lt;br /&gt;
&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery works:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output lists the discovered clusters.&lt;br /&gt;
Now authenticate the controller node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node -P&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output shows the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the CONTROLLER node. Please read the full script description here: https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.1.0/24 --floating_range 10.24.41.0/24 --floating_pool &amp;quot;start=10.24.41.151,end=10.24.41.199&amp;quot; --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL &lt;br /&gt;
&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Compute Node ==&lt;br /&gt;
&lt;br /&gt;
Clone Virtuozzo scripts to your COMPUTE node:&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd /vz/virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on a node other than the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery works:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output lists the discovered clusters.&lt;br /&gt;
Now authenticate the compute node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node -P&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output shows the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the COMPUTE node. Please read the script description here: https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1!  --use_provider_network true --mode COMPUTE --controller 10.24.41.25 &lt;br /&gt;
&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
== How to change Virtualization Type to Virtual Machines on the Compute Node ==&lt;br /&gt;
&lt;br /&gt;
If you want to use virtual machines instead of containers on your compute node, you need to change the virtualization type to KVM on the selected compute node.&lt;br /&gt;
&lt;br /&gt;
Open nova configuration file:&lt;br /&gt;
 $ vi /etc/nova/nova.conf&lt;br /&gt;
&lt;br /&gt;
Change the following lines:&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = kvm&lt;br /&gt;
 images_type = qcow2&lt;br /&gt;
 connection_uri = qemu:///system&lt;br /&gt;
&lt;br /&gt;
Delete the line:&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
Save the file.&lt;br /&gt;
&lt;br /&gt;
Restart nova-compute service:&lt;br /&gt;
 $ su stack&lt;br /&gt;
 $ screen -r&lt;br /&gt;
Press Ctrl+c to stop the running process, then start nova-compute again:&lt;br /&gt;
 $ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' &amp;amp; echo $! &amp;gt;/vz/stack/status/stack/n-cpu.pid; fg || echo &amp;quot;n-cpu failed to start&amp;quot; | tee &amp;quot;/vz/stack/status/stack/n-cpu.failure&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To detach from the screen session:&lt;br /&gt;
Press Ctrl+a, then d.&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;./setup_devstack_vz7.sh&amp;lt;/code&amp;gt; with the options you need.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19772</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19772"/>
		<updated>2016-07-15T11:44:17Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6, and Virtuozzo 7 adds many new capabilities to the OpenStack integration.&lt;br /&gt;
Current limitations (bugs, not implemented or by design):&lt;br /&gt;
#HA does not work.&lt;br /&gt;
#Virtuozzo Storage is not supported for containers and VMs in cinder. &lt;br /&gt;
&lt;br /&gt;
This guide shows how to install OpenStack on Virtuozzo nodes with the help of the DevStack tools. DevStack installs a stateless OpenStack intended for demo purposes, which means the setup is reset after a host reboot. Virtual machines are therefore the best platform for this kind of OpenStack setup.&lt;br /&gt;
&lt;br /&gt;
Please note that OpenStack currently does not support containers and virtual machines on the same node, so you need at least two nodes to try managing both containers and VMs.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to set up OpenStack with Virtuozzo 7:&lt;br /&gt;
#Controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and as a Virtuozzo container host.&lt;br /&gt;
#Compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as a virtual machine host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/vz.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
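&lt;br /&gt;
Steps 1 and 2 can also be done non-interactively; this is a sketch only, assuming the file and option names shown above:&lt;br /&gt;
 $ sed -i 's/ip_conntrack_disable_ve0=1/ip_conntrack_disable_ve0=0/' /etc/modprobe.d/vz.conf&lt;br /&gt;
 $ reboot&lt;br /&gt;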
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Controller Node with Virtuozzo Containers Support == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.&lt;br /&gt;
&lt;br /&gt;
Clone the Virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on another node, you need to set up the Virtuozzo Storage client and authorize the controller node in the Virtuozzo Storage cluster.&lt;br /&gt;
&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery works:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output lists the discovered clusters.&lt;br /&gt;
Now authenticate the controller node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node -P&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output shows the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the CONTROLLER node. Please read full script description here https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.1.0/24 --floating_range 10.24.41.0/24 --floating_pool &amp;quot;start=10.24.41.151,end=10.24.41.199&amp;quot; --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL &lt;br /&gt;
&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Compute Node ==&lt;br /&gt;
&lt;br /&gt;
Clone Virtuozzo scripts to your COMPUTE node:&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd /vz/virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack and your Virtuozzo Storage is running on a node other than the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
Setup Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery works:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output lists the discovered clusters.&lt;br /&gt;
Now authenticate the compute node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node -P&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output shows the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the COMPUTE node. Please read script description here https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.26 --password Virtuozzo1! --use_provider_network true --mode COMPUTE --controller 10.24.41.25&lt;br /&gt;
&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
== How to change Virtualization Type to Virtual Machines on the Compute Node ==&lt;br /&gt;
&lt;br /&gt;
If you want to use virtual machines instead of containers on your compute node, you need to change the virtualization type to KVM on the selected compute node.&lt;br /&gt;
&lt;br /&gt;
Open nova configuration file:&lt;br /&gt;
 $ vi /etc/nova/nova.conf&lt;br /&gt;
&lt;br /&gt;
Change the following lines:&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = kvm&lt;br /&gt;
 images_type = qcow2&lt;br /&gt;
 connection_uri = qemu:///system&lt;br /&gt;
&lt;br /&gt;
Delete the line:&lt;br /&gt;
inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
Save the file.&lt;br /&gt;
&lt;br /&gt;
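The manual edits above can also be applied non-interactively. A minimal sketch using sed, operating on a hypothetical copy of the [libvirt] section (/tmp/nova-libvirt.conf stands in for the real /etc/nova/nova.conf):&lt;br /&gt;

```shell
# Hypothetical sample of the [libvirt] section from nova.conf;
# the real file is /etc/nova/nova.conf.
cat > /tmp/nova-libvirt.conf <<'EOF'
[libvirt]
virt_type = parallels
images_type = ploop
connection_uri = parallels+unix:///system
inject_partition = -2
EOF

# Switch the node from containers to KVM virtual machines:
# change virt_type/images_type/connection_uri and drop inject_partition.
sed -i \
    -e 's|^virt_type = .*|virt_type = kvm|' \
    -e 's|^images_type = .*|images_type = qcow2|' \
    -e 's|^connection_uri = .*|connection_uri = qemu:///system|' \
    -e '/^inject_partition = -2/d' \
    /tmp/nova-libvirt.conf

cat /tmp/nova-libvirt.conf
```

Running the same sed expressions against the real file achieves the edits described above in one step; back up nova.conf first.&lt;br /&gt;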
Restart nova-compute service:&lt;br /&gt;
 $ su stack&lt;br /&gt;
 $ screen -r&lt;br /&gt;
Press Ctrl+C to stop the running nova-compute process.&lt;br /&gt;
 $ sg libvirtd '/usr/bin/nova-compute --config-file /etc/nova/nova.conf' &amp;amp; echo $! &amp;gt;/vz/stack/status/stack/n-cpu.pid; fg || echo &amp;quot;n-cpu failed to start&amp;quot; | tee &amp;quot;/vz/stack/status/stack/n-cpu.failure&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To detach from the screen session, press Ctrl+a, then d.&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run ./setup_devstack_vz7.sh with options you need.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
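If you prefer to script this change, here is a minimal sketch, assuming the default value shown above (/tmp/glance-api.conf stands in for the real /etc/glance/glance-api.conf):&lt;br /&gt;

```shell
# Hypothetical copy of the setting; the real file is
# /etc/glance/glance-api.conf.
echo 'disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso' > /tmp/glance-api.conf

# Append ploop to disk_formats, but only if it is not already listed,
# so the edit is safe to re-run.
grep -q ploop /tmp/glance-api.conf || \
    sed -i 's|^disk_formats = .*|&,ploop|' /tmp/glance-api.conf

cat /tmp/glance-api.conf
```
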
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19771</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19771"/>
		<updated>2016-07-15T11:37:14Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6. With Virtuozzo 7 we are adding many new capabilities to the OpenStack integration.&lt;br /&gt;
Current limitations (bugs, not yet implemented, or by design):&lt;br /&gt;
#HA does not work.&lt;br /&gt;
#Virtuozzo Storage is not supported for containers and VMs in cinder. &lt;br /&gt;
&lt;br /&gt;
This guide describes how to install OpenStack on Virtuozzo nodes with the help of the Devstack tools. Devstack installs a stateless OpenStack intended for demo purposes, which means the setup is reset after a host reboot. Therefore, the best platform for this kind of OpenStack setup is virtual machines.&lt;br /&gt;
&lt;br /&gt;
Please note that OpenStack currently does not support containers and virtual machines on the same node, so you need at least two nodes to try managing both containers and VMs.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to set up OpenStack with Virtuozzo 7:&lt;br /&gt;
#controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and the Virtuozzo containers host.&lt;br /&gt;
#compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the virtual machines host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/vz.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
&lt;br /&gt;
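The conntrack change above can be sketched as a one-line edit (/tmp/vz.conf stands in for the real /etc/modprobe.d/vz.conf; a reboot is still required afterwards):&lt;br /&gt;

```shell
# Hypothetical copy of the module options file; the real path is
# /etc/modprobe.d/vz.conf.
echo 'options nf_conntrack ip_conntrack_disable_ve0=1' > /tmp/vz.conf

# Enable IP connection tracking for CT0 by flipping 1 to 0.
sed -i 's/ip_conntrack_disable_ve0=1/ip_conntrack_disable_ve0=0/' /tmp/vz.conf

cat /tmp/vz.conf
```
&lt;br /&gt;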
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 $ yum install git -y&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Controller Node with Virtuozzo Containers Support == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Setup OpenStack Compute Node section.&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
If you want to use Virtuozzo Storage with OpenStack, and your Virtuozzo Storage cluster runs on a node other than the compute node, you need to set up the Virtuozzo Storage client and authorize the compute node in the Virtuozzo Storage cluster.&lt;br /&gt;
&lt;br /&gt;
Set up the Virtuozzo Storage client:&lt;br /&gt;
 $ yum install vstorage-client -y&lt;br /&gt;
First, check that cluster discovery is working:&lt;br /&gt;
 $ vstorage discover&lt;br /&gt;
The output will list the discovered clusters.&lt;br /&gt;
Now authenticate the controller node on the Virtuozzo Storage cluster:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME auth-node -P&lt;br /&gt;
Enter the Virtuozzo Storage cluster password and press Enter.&lt;br /&gt;
Check the cluster properties:&lt;br /&gt;
 $ vstorage -c $CLUSTER_NAME top&lt;br /&gt;
The output will show the Virtuozzo Storage cluster properties and state.&lt;br /&gt;
&lt;br /&gt;
Configure the script on the CONTROLLER node. Please read the full script description at https://github.com/virtuozzo/virtuozzo-openstack-scripts/blob/master/README.md&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
 $ source vzrc --host_ip 10.24.41.25 --password Virtuozzo1! --use_provider_network true  --fixed_range 192.168.1.0/24 --floating_range 10.24.41.0/24 --floating_pool &amp;quot;start=10.24.41.151,end=10.24.41.199&amp;quot; --public_gateway 10.24.41.1 --gateway 192.168.0.1 --vzstorage vstorage1 --mode ALL &lt;br /&gt;
&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 $ ./setup_devstack_vz7.sh&lt;br /&gt;
&lt;br /&gt;
Installation can take up to 30 minutes depending on your Internet connection speed. Finished!&lt;br /&gt;
&lt;br /&gt;
== Setup OpenStack Compute Node ==&lt;br /&gt;
&lt;br /&gt;
Clone the Virtuozzo scripts to your COMPUTE node:&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd /vz/virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run ./setup_devstack_vz7.sh with options you need.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19770</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19770"/>
		<updated>2016-07-15T11:29:00Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo has supported OpenStack as a cloud management solution since version 6. With Virtuozzo 7 we are adding many new capabilities to the OpenStack integration.&lt;br /&gt;
Current limitations (bugs, not yet implemented, or by design):&lt;br /&gt;
#HA does not work.&lt;br /&gt;
#Virtuozzo Storage is not supported for containers and VMs in cinder. &lt;br /&gt;
&lt;br /&gt;
This guide describes how to install OpenStack on Virtuozzo nodes with the help of the Devstack tools. Devstack installs a stateless OpenStack intended for demo purposes, which means the setup is reset after a host reboot. Therefore, the best platform for this kind of OpenStack setup is virtual machines.&lt;br /&gt;
&lt;br /&gt;
You need the following infrastructure to set up OpenStack with Virtuozzo 7:&lt;br /&gt;
#controller host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the controller and the Virtuozzo containers host.&lt;br /&gt;
#compute host: a physical host or virtual machine with at least 4 CPUs, 8 GB RAM, and a 150 GB disk. This host will act as the virtual machines host.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo nodes first. Install Virtuozzo 7 on your controller and compute hosts as usual. You can use basic (local) storage or Virtuozzo Storage. Update the Virtuozzo hosts:&lt;br /&gt;
 $ yum update -y&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/vz.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 yum install git&lt;br /&gt;
&lt;br /&gt;
== Devstack all-in-one installation == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Devstack multi node installation section.&lt;br /&gt;
&lt;br /&gt;
Please note that OpenStack currently does not support containers and virtual machines on the same node.&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
Run the script and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_HOST_IP YOUR_PASSWORD&lt;br /&gt;
&lt;br /&gt;
== Devstack multi node installation == &amp;lt;!--T:11--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:12--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your CONTROLLER node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:13--&amp;gt;&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_HOST_IP YOUR_PASSWORD CONTROLLER&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:14--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your COMPUTE node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:15--&amp;gt;&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_COMPUTE_HOST_IP YOUR_PASSWORD COMPUTE YOUR_CONTROLLER_HOST_IP&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run ./setup_devstack_vz7.sh with options you need.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19769</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19769"/>
		<updated>2016-07-15T11:22:06Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
You need to install and update your Virtuozzo or OpenVZ nodes first.&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/vz.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 yum install git&lt;br /&gt;
&lt;br /&gt;
== Devstack all-in-one installation == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Devstack multi node installation section.&lt;br /&gt;
&lt;br /&gt;
Please note that OpenStack currently does not support containers and virtual machines on the same node.&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:4--&amp;gt;&lt;br /&gt;
Run the script and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_HOST_IP YOUR_PASSWORD&lt;br /&gt;
&lt;br /&gt;
== Devstack multi node installation == &amp;lt;!--T:11--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:12--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your CONTROLLER node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:13--&amp;gt;&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_HOST_IP YOUR_PASSWORD CONTROLLER&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:14--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your COMPUTE node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:15--&amp;gt;&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_COMPUTE_HOST_IP YOUR_PASSWORD COMPUTE YOUR_CONTROLLER_HOST_IP&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run ./setup_devstack_vz7.sh with options you need.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19756</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19756"/>
		<updated>2016-06-28T15:26:40Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/vz.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
&lt;br /&gt;
Git must be installed on all your Virtuozzo nodes:&lt;br /&gt;
 yum install git&lt;br /&gt;
&lt;br /&gt;
== Devstack all-in-one installation == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up the OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Devstack multi node installation section.&lt;br /&gt;
&lt;br /&gt;
Please note that OpenStack currently does not support containers and virtual machines on the same node.&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:4--&amp;gt;&lt;br /&gt;
Run the script and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_HOST_IP YOUR_PASSWORD&lt;br /&gt;
&lt;br /&gt;
== Devstack multi node installation == &amp;lt;!--T:11--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:12--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your CONTROLLER node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:13--&amp;gt;&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_HOST_IP YOUR_PASSWORD CONTROLLER&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:14--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your COMPUTE node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:15--&amp;gt;&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_COMPUTE_HOST_IP YOUR_PASSWORD COMPUTE YOUR_CONTROLLER_HOST_IP&lt;br /&gt;
&lt;br /&gt;
== Change iptables rules to allow access to Horizon Dashboard ==&lt;br /&gt;
&lt;br /&gt;
Remove the reject rule in iptables:&lt;br /&gt;
 iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run ./setup_devstack_vz7.sh with options you need.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19755</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19755"/>
		<updated>2016-06-28T15:26:10Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/vz.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
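Steps 1 and 2 above amount to a one-line substitution; a minimal sketch, shown against a scratch copy of the file (on a real node, point the path at /etc/modprobe.d/vz.conf):&lt;br /&gt;

```shell
# Flip ip_conntrack_disable_ve0 from 1 to 0.
# Demonstrated on a scratch copy; use /etc/modprobe.d/vz.conf on a real node.
CONF=$(mktemp)
echo 'options nf_conntrack ip_conntrack_disable_ve0=1' > "$CONF"
sed -i 's/ip_conntrack_disable_ve0=1/ip_conntrack_disable_ve0=0/' "$CONF"
grep ip_conntrack_disable_ve0 "$CONF"
```

A reboot is still required for the module option to take effect.&lt;br /&gt;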
&lt;br /&gt;
Git must be installed on your Virtuozzo node:&lt;br /&gt;
 yum install git&lt;br /&gt;
&lt;br /&gt;
== Devstack all-in-one installation == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up an OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Devstack multi node installation section.&lt;br /&gt;
&lt;br /&gt;
Please note that OpenStack currently does not support containers and virtual machines on the same node.&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:4--&amp;gt;&lt;br /&gt;
Run the script and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_HOST_IP YOUR_PASSWORD&lt;br /&gt;
&lt;br /&gt;
== Devstack multi node installation == &amp;lt;!--T:11--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:12--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your CONTROLLER node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:13--&amp;gt;&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_HOST_IP YOUR_PASSWORD CONTROLLER&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:14--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your COMPUTE node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:15--&amp;gt;&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_COMPUTE_HOST_IP YOUR_PASSWORD COMPUTE YOUR_CONTROLLER_HOST_IP&lt;br /&gt;
&lt;br /&gt;
== Change iptables rules to allow access to Horizon Dashboard ==&lt;br /&gt;
&lt;br /&gt;
Remove the reject rule in iptables:&lt;br /&gt;
 iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;./setup_devstack_vz7.sh&amp;lt;/code&amp;gt; with the options you need.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
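The edit above can also be scripted idempotently; a minimal sketch, shown against a scratch copy of the file (on a real node, point the path at /etc/glance/glance-api.conf):&lt;br /&gt;

```shell
# Append ploop to disk_formats unless it is already listed.
# Demonstrated on a scratch copy; use /etc/glance/glance-api.conf on a real node.
CONF=$(mktemp)
echo 'disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso' > "$CONF"
grep -q 'ploop' "$CONF" || sed -i 's/^disk_formats = .*/&,ploop/' "$CONF"
grep '^disk_formats' "$CONF"
```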
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19647</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19647"/>
		<updated>2016-06-07T13:45:40Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/parallels.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
&lt;br /&gt;
Git must be installed on your Virtuozzo node:&lt;br /&gt;
 yum install git&lt;br /&gt;
&lt;br /&gt;
== Devstack all-in-one installation == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up an OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Devstack multi node installation section.&lt;br /&gt;
&lt;br /&gt;
Please note that OpenStack currently does not support containers and virtual machines on the same node.&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:4--&amp;gt;&lt;br /&gt;
Run the script and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_HOST_IP YOUR_PASSWORD&lt;br /&gt;
&lt;br /&gt;
== Devstack multi node installation == &amp;lt;!--T:11--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:12--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your CONTROLLER node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:13--&amp;gt;&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_HOST_IP YOUR_PASSWORD CONTROLLER&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:14--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your COMPUTE node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:15--&amp;gt;&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_COMPUTE_HOST_IP YOUR_PASSWORD COMPUTE YOUR_CONTROLLER_HOST_IP&lt;br /&gt;
&lt;br /&gt;
== Change iptables rules to allow access to Horizon Dashboard ==&lt;br /&gt;
&lt;br /&gt;
Remove the reject rule in iptables:&lt;br /&gt;
 iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;./setup_devstack_vz7.sh&amp;lt;/code&amp;gt; with the options you need.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19641</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19641"/>
		<updated>2016-06-06T14:06:37Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/parallels.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
&lt;br /&gt;
Git must be installed on your Virtuozzo node:&lt;br /&gt;
 yum install git&lt;br /&gt;
&lt;br /&gt;
== Devstack all-in-one installation == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
You can set up an OpenStack controller node together with a compute node on the same server for demo or test purposes. In this case, a compute node with Virtuozzo Containers support will be deployed. You can add another compute node with containers or VMs at any time, as described in the Devstack multi node installation section.&lt;br /&gt;
&lt;br /&gt;
Please note that OpenStack currently does not support containers and virtual machines on the same node.&lt;br /&gt;
&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:4--&amp;gt;&lt;br /&gt;
Run the script and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_HOST_IP YOUR_PASSWORD&lt;br /&gt;
&lt;br /&gt;
== Devstack multi node installation == &amp;lt;!--T:11--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:12--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your CONTROLLER node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:13--&amp;gt;&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_HOST_IP YOUR_PASSWORD CONTROLLER&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:14--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your COMPUTE node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:15--&amp;gt;&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh YOUR_COMPUTE_HOST_IP YOUR_PASSWORD COMPUTE YOUR_CONTROLLER_HOST_IP&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;./setup_devstack_vz7.sh&amp;lt;/code&amp;gt; with the options you need.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19598</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19598"/>
		<updated>2016-05-25T23:36:19Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/parallels.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
&lt;br /&gt;
Git must be installed on your Virtuozzo node:&lt;br /&gt;
 yum install git&lt;br /&gt;
&lt;br /&gt;
== Devstack all-in-one installation == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:4--&amp;gt;&lt;br /&gt;
Run the script and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh HOST_IP PASSWORD&lt;br /&gt;
&lt;br /&gt;
== Devstack multi node installation == &amp;lt;!--T:11--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:12--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your CONTROLLER node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:13--&amp;gt;&lt;br /&gt;
Run the script on your CONTROLLER node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh HOST_IP PASSWORD true&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:14--&amp;gt;&lt;br /&gt;
Clone virtuozzo scripts on your COMPUTE node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:15--&amp;gt;&lt;br /&gt;
Run the script on your COMPUTE node and follow instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7_compute.sh COMPUTE_HOST_IP PASSWORD CONTROLLER_HOST_IP&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;./setup_devstack_vz7.sh&amp;lt;/code&amp;gt; with the options you need.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Using_Virtuozzo_in_the_Amazon_EC2&amp;diff=19597</id>
		<title>Using Virtuozzo in the Amazon EC2</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Using_Virtuozzo_in_the_Amazon_EC2&amp;diff=19597"/>
		<updated>2016-05-25T13:40:52Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
To allow customers to familiarize themselves with the improved container technology of Virtuozzo, and to maximize AWS instance utilization along with security and isolation, we introduce a Virtuozzo image for Amazon EC2.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:2--&amp;gt;&lt;br /&gt;
Please note that:&lt;br /&gt;
* Virtuozzo only supports containers when deployed on Amazon EC2.&lt;br /&gt;
* The Virtuozzo image is shipped for only one version: Virtuozzo 7.&lt;br /&gt;
&lt;br /&gt;
== Steps to provisioning == &amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:4--&amp;gt;&lt;br /&gt;
* Log into the [https://aws.amazon.com/marketplace AWS Marketplace], search for the AMI from Odin as the publisher, then click the selected product.&lt;br /&gt;
* Check the product description to verify it suits your needs. Then click the &amp;quot;Continue&amp;quot; button.&lt;br /&gt;
* You can choose ''Manual Launch'' with the EC2 console by pressing the corresponding tab, or continue with the 1-Click Launch using predefined settings. The 1-Click Launch option does not allow you to modify the default storage size and type (30 GB magnetic storage) when creating the instance. To change disk storage after deployment, see the Amazon AWS documentation: [http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html Expanding the Storage Space of a Volume].&lt;br /&gt;
&lt;br /&gt;
=== 1-Click Launch (predefined settings) === &amp;lt;!--T:5--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:6--&amp;gt;&lt;br /&gt;
* Select the &amp;quot;Region&amp;quot; to deploy the instance in and the EC2 Instance Type, depending on your needs (note: prices differ between regions).&lt;br /&gt;
* In VPC settings, select where your instance will be deployed: EC2-classic (recommended) or your personal Virtual Private Cloud. If a VPC network is selected, please make sure that your virtual network is configured to provide internet access to the instance being deployed. The main differences between EC2-classic and VPC are described in [http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-vpc.html Amazon EC2 and Amazon Virtual Private Cloud (VPC)]. Find more information about VPC in the [http://aws.amazon.com/documentation/vpc/ Amazon VPC documentation].&lt;br /&gt;
* Select the default options or create a new Security Group based on seller settings. Pay special attention to the ports that are required for Plesk - see Knowledgebase article [http://kb.odin.com/en/391 KB391: Which ports need to be opened for all Plesk services to work with a firewall?]&lt;br /&gt;
* Select the Key Pair to be used for connection to the instance (an existing Key Pair is required for connection to the OpenVZ instance). A Key Pair can be generated in the [https://console.aws.amazon.com/ec AWS Management Console].&lt;br /&gt;
* Click the Launch with 1-Click button.&lt;br /&gt;
By default, instances are deployed with small root storage (30 GB). This allows you to deploy around 10 containers, depending on the container OS and installed packages. To deploy instances with bigger storage, use Manual Launch with the EC2 console.&lt;br /&gt;
To change the disk storage after deployment, check the Amazon AWS documentation: [http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html Expanding the Storage Space of a Volume].&lt;br /&gt;
&lt;br /&gt;
=== Manual Launch with EC2 console === &amp;lt;!--T:7--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:8--&amp;gt;&lt;br /&gt;
Adjust additional settings such as disk space before launch.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:9--&amp;gt;&lt;br /&gt;
* Click on the Manual Launch tab.&lt;br /&gt;
* Click the Launch with EC2 Console button in the Region the instance is to be deployed.&lt;br /&gt;
* In the opened EC2 Console, choose an Instance Type depending on your requirements. Then, click the Next: Configure Instance Details button.&lt;br /&gt;
* Set instance details. Here, you can select how many instances to deploy and select a Network (EC2-classic or VPC). If a VPC network is selected, please make sure that your virtual network is configured to provide internet access to the instance being deployed. The main differences between EC2-classic and VPC are described in [http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-vpc.html Amazon EC2 and Amazon Virtual Private Cloud (VPC)]. Find more information about VPC in the [http://aws.amazon.com/documentation/vpc/ Amazon VPC documentation].&lt;br /&gt;
* Change other options if required, then click Next: Add Storage&lt;br /&gt;
* Add storage to your instance. It is recommended that you increase your disk storage from the default values; your disk will be automatically resized when the instance is deployed. To change disk storage after deployment, check the Amazon AWS documentation: [http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html Expanding the Storage Space of a Volume]. You can also add more storage volumes to your instance and change the storage volume types to increase performance. Find more information about storage types and pricing in the Amazon AWS documentation: [http://aws.amazon.com/ebs/details/ Amazon EBS Product Details]. Click Next: Tag Instance&lt;br /&gt;
* Add Tags for the instance. For example, you can define a tag with key = Name and value = openvz. Learn more about tagging your Amazon EC2 resources. Click Next: Configure Security Group&lt;br /&gt;
* Configure the security group. A security group is a set of firewall rules that control the traffic for your instance. It is recommended that you configure the security group depending on the services you are going to serve; follow the [[Setting_up_an_iptables_firewall|steps to set up iptables]]. Click Next: Review Instance Launch&lt;br /&gt;
* Review your instance launch details. You can go back to edit changes for each section. Click Launch to assign a key pair to your instance and complete the launch process.&lt;br /&gt;
* When the instance is deployed, click the Visit Your Software link. The page with your subscription will be opened.&lt;br /&gt;
* Select [https://console.aws.amazon.com/ec2/v2/ Manage in the AWS console]. In the opened AWS Management Console, open your instances list (using the Instances link in the left menu) and select the instance.&lt;br /&gt;
* (Recommended) After every stop/start, your instance changes its external and internal IP pair. Thus we recommend attaching an Elastic IP to the instance. In the left menu, select Elastic IPs and Allocate New Address, or select any existing unassociated address to be allocated to your instance. After Elastic IP attachment, reboot the instance and perform additional actions to configure Plesk (see the Changing IP Address section). Please find more information about Elastic IP in the Amazon AWS documentation: [http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html Elastic IP Addresses].&lt;br /&gt;
* To connect to your instance, use SSH as ec2user with the private key of the Key Pair you deployed the instance with. For example:&lt;br /&gt;
 &lt;br /&gt;
 # ssh -i &amp;lt;path to private key&amp;gt; ec2user@&amp;lt;elastic or public IP&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:10--&amp;gt;&lt;br /&gt;
* To operate OpenVZ, you need to enter sudo mode:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--T:11--&amp;gt;&lt;br /&gt;
# sudo -i&lt;br /&gt;
&lt;br /&gt;
== Configure the external IP address for the container == &amp;lt;!--T:12--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:13--&amp;gt;&lt;br /&gt;
To access your container through the Internet, you can attach additional Private IPs and Elastic IPs to the instance and then attach each Private IP to a specific container.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:14--&amp;gt;&lt;br /&gt;
Please review [http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html this article] to learn how to assign additional Elastic IPs to the instance. If you need additional information on IP addressing in Amazon EC2, please see [http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html this article].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:15--&amp;gt;&lt;br /&gt;
Log in to the Amazon EC2 Management Console.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
Assign a new Private IP to your instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Choose your instance;&lt;br /&gt;
* Click Actions &amp;gt; Networking &amp;gt; Manage Private IP Addresses;&lt;br /&gt;
* Click Assign New IP;&lt;br /&gt;
* Click Yes, Update.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
Assign a new Elastic IP to the corresponding Private IP of your instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Choose Elastic IP section in the menu;&lt;br /&gt;
* Click Allocate New Addresses;&lt;br /&gt;
* Choose the just-created Elastic IP and click Actions &amp;gt; Associate Address;&lt;br /&gt;
* Choose your instance;&lt;br /&gt;
* Choose the corresponding Private IP of your instance;&lt;br /&gt;
* Click Associate.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:20--&amp;gt;&lt;br /&gt;
Connect to your OpenVZ instance via SSH.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:21--&amp;gt;&lt;br /&gt;
Create an example container:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--T:22--&amp;gt;&lt;br /&gt;
# prlctl create 100700 --vmtype ct&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:23--&amp;gt;&lt;br /&gt;
Assign a Private IP and a DNS server to the container:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--T:24--&amp;gt;&lt;br /&gt;
# prlctl set 100700 --ipadd &amp;lt;Private IP address&amp;gt;/24&lt;br /&gt;
 # prlctl set 100700 --nameserver 8.8.8.8&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:25--&amp;gt;&lt;br /&gt;
Start the container:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--T:26--&amp;gt;&lt;br /&gt;
# prlctl start 100700&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:27--&amp;gt;&lt;br /&gt;
Enter the container and set the root password:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--T:28--&amp;gt;&lt;br /&gt;
# prlctl enter 100700&lt;br /&gt;
 # passwd&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:29--&amp;gt;&lt;br /&gt;
Connect to the container via SSH:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--T:30--&amp;gt;&lt;br /&gt;
# ssh root@&amp;lt;Elastic IP Address&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Configure NAT on the instance == &amp;lt;!--T:31--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:32--&amp;gt;&lt;br /&gt;
If you do not want to attach multiple Elastic IPs to your instance, you may also [[Using_NAT_for_container_with_private_IPs|configure internal NAT]] on your OpenVZ instance.&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:33--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:34--&amp;gt;&lt;br /&gt;
* [http://www.slideshare.net/kentaroebisawa/quick-start-guide-using-virtuozzo-7-on-aws-ec2 Quick Start Guide using Virtuozzo 7 (β) on AWS EC2]&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: Installation]]&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19580</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19580"/>
		<updated>2016-05-24T18:40:59Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/parallels.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
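The manual edit above can also be scripted. A minimal sketch, assuming GNU sed and shown against a demo copy of the file (on a real Virtuozzo 7 host the path is /etc/modprobe.d/parallels.conf, editing it requires root, and a reboot is still needed):

```shell
# Demo copy standing in for /etc/modprobe.d/parallels.conf (hypothetical demo path).
conf=./parallels.conf.demo
printf 'options nf_conntrack ip_conntrack_disable_ve0=1\n' > "$conf"
# Flip the flag from 1 (conntrack disabled for CT0) to 0 (enabled).
sed -i 's/ip_conntrack_disable_ve0=1/ip_conntrack_disable_ve0=0/' "$conf"
cat "$conf"
```

After applying the same substitution to the real file, reboot the system as described above.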
&lt;br /&gt;
== Devstack all-in-one installation == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
Clone the Virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:4--&amp;gt;&lt;br /&gt;
Run the script and follow the instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh HOST_IP PASSWORD&lt;br /&gt;
&lt;br /&gt;
== Devstack multi node installation == &amp;lt;!--T:11--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:12--&amp;gt;&lt;br /&gt;
Clone the Virtuozzo scripts on your CONTROLLER node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:13--&amp;gt;&lt;br /&gt;
Run the script on your CONTROLLER node and follow the instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh HOST_IP PASSWORD true&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:14--&amp;gt;&lt;br /&gt;
Clone the Virtuozzo scripts on your COMPUTE node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:15--&amp;gt;&lt;br /&gt;
Run the script on your COMPUTE node and follow the instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7_compute.sh COMPUTE_HOST_IP PASSWORD CONTROLLER_HOST_IP&lt;br /&gt;
&lt;br /&gt;
== How to redeploy OpenStack on the same nodes ==&lt;br /&gt;
&lt;br /&gt;
Your OpenStack setup will be reset after a node restart. To redeploy OpenStack on the same nodes, do the following:&lt;br /&gt;
# &amp;lt;code&amp;gt;cd /vz/virtuozzo-openstack-scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
# &amp;lt;code&amp;gt;git pull&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run ./setup_devstack_vz7.sh with the options you need.&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
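This change can also be applied non-interactively. A minimal sketch, assuming GNU sed and shown against a demo copy of the file (the real path on the controller node is /etc/glance/glance-api.conf):

```shell
# Demo copy standing in for /etc/glance/glance-api.conf (hypothetical demo path).
conf=./glance-api.conf.demo
printf 'disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso\n' > "$conf"
# Append ",ploop" to the disk_formats line unless it is already listed.
grep -q ploop "$conf" || sed -i '/^disk_formats = /s/$/,ploop/' "$conf"
cat "$conf"
```

Remember that the glance-api service must still be restarted after editing the real file.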
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19579</id>
		<title>Setup OpenStack with Virtuozzo 7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Setup_OpenStack_with_Virtuozzo_7&amp;diff=19579"/>
		<updated>2016-05-24T18:30:06Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
This article describes how to install OpenStack on [[Virtuozzo]] 7.&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
IP connection tracking should be enabled for CT0. Please do the following:&lt;br /&gt;
#Open the file /etc/modprobe.d/parallels.conf&lt;br /&gt;
#Change the line &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=1&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;options nf_conntrack ip_conntrack_disable_ve0=0&amp;lt;/code&amp;gt;&lt;br /&gt;
#Reboot the system&lt;br /&gt;
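The manual edit above can also be scripted. A minimal sketch, assuming GNU sed and shown against a demo copy of the file (on a real Virtuozzo 7 host the path is /etc/modprobe.d/parallels.conf, editing it requires root, and a reboot is still needed):

```shell
# Demo copy standing in for /etc/modprobe.d/parallels.conf (hypothetical demo path).
conf=./parallels.conf.demo
printf 'options nf_conntrack ip_conntrack_disable_ve0=1\n' > "$conf"
# Flip the flag from 1 (conntrack disabled for CT0) to 0 (enabled).
sed -i 's/ip_conntrack_disable_ve0=1/ip_conntrack_disable_ve0=0/' "$conf"
cat "$conf"
```

After applying the same substitution to the real file, reboot the system as described above.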
&lt;br /&gt;
== Devstack all-in-one installation == &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
Clone the Virtuozzo scripts:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:4--&amp;gt;&lt;br /&gt;
Run the script and follow the instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh HOST_IP PASSWORD&lt;br /&gt;
&lt;br /&gt;
== Devstack multi node installation == &amp;lt;!--T:11--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:12--&amp;gt;&lt;br /&gt;
Clone the Virtuozzo scripts on your CONTROLLER node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:13--&amp;gt;&lt;br /&gt;
Run the script on your CONTROLLER node and follow the instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7.sh HOST_IP PASSWORD true&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:14--&amp;gt;&lt;br /&gt;
Clone the Virtuozzo scripts on your COMPUTE node:&lt;br /&gt;
&lt;br /&gt;
 $ cd /vz&lt;br /&gt;
 $ git clone https://github.com/virtuozzo/virtuozzo-openstack-scripts&lt;br /&gt;
 $ cd virtuozzo-openstack-scripts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:15--&amp;gt;&lt;br /&gt;
Run the script on your COMPUTE node and follow the instructions (if any):&lt;br /&gt;
 &lt;br /&gt;
 $ ./setup_devstack_vz7_compute.sh COMPUTE_HOST_IP PASSWORD CONTROLLER_HOST_IP&lt;br /&gt;
&lt;br /&gt;
== Install and configure a nova controller node on [[Virtuozzo]] 7 == &amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
&lt;br /&gt;
* Change the disk_formats string in /etc/glance/glance-api.conf so that it contains 'ploop', like this:&lt;br /&gt;
 &lt;br /&gt;
 disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop&lt;br /&gt;
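This change can also be applied non-interactively. A minimal sketch, assuming GNU sed and shown against a demo copy of the file (the real path on the controller node is /etc/glance/glance-api.conf):

```shell
# Demo copy standing in for /etc/glance/glance-api.conf (hypothetical demo path).
conf=./glance-api.conf.demo
printf 'disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso\n' > "$conf"
# Append ",ploop" to the disk_formats line unless it is already listed.
grep -q ploop "$conf" || sed -i '/^disk_formats = /s/$/,ploop/' "$conf"
cat "$conf"
```

Remember that the glance-api service must still be restarted after editing the real file.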
&lt;br /&gt;
* Restart glance-api service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-glance-api.service&lt;br /&gt;
&lt;br /&gt;
* Download the container [http://updates.pvs.parallels.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz image]&lt;br /&gt;
* Unpack it&lt;br /&gt;
&lt;br /&gt;
 $ tar -xzvf centos7-exe.hds.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Upload the image to glance:&lt;br /&gt;
NOTE: this image was created for testing purposes only. Don't use it in production as is!&lt;br /&gt;
&lt;br /&gt;
 glance image-create --name centos7-exe --disk-format ploop --container-format bare --property vm_mode=exe --file centos7-exe.hds&lt;br /&gt;
&lt;br /&gt;
* Restart nova services:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-api.service \&lt;br /&gt;
  openstack-nova-cert.service openstack-nova-consoleauth.service \&lt;br /&gt;
  openstack-nova-scheduler.service openstack-nova-conductor.service \&lt;br /&gt;
  openstack-nova-novncproxy.service&lt;br /&gt;
&lt;br /&gt;
== Install and configure a compute node on [[Virtuozzo]] 7 == &amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
* Follow instructions on [http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html OpenStack.org]&lt;br /&gt;
* In addition to the above instructions, change /etc/nova/nova.conf:&lt;br /&gt;
&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ...&lt;br /&gt;
 vnc_keymap =&lt;br /&gt;
 force_raw_images = False&lt;br /&gt;
&lt;br /&gt;
 [libvirt]&lt;br /&gt;
 ...&lt;br /&gt;
 virt_type = parallels&lt;br /&gt;
 images_type = ploop&lt;br /&gt;
 connection_uri = parallels+unix:///system&lt;br /&gt;
 inject_partition = -2&lt;br /&gt;
&lt;br /&gt;
* Then restart nova-compute service:&lt;br /&gt;
&lt;br /&gt;
 systemctl restart openstack-nova-compute.service&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:100--&amp;gt;&lt;br /&gt;
* [http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html OpenStack installation guide]&lt;br /&gt;
* [https://docs.openvz.org/ Virtuozzo documentation]&lt;br /&gt;
* [[Virtuozzo ecosystem]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Quick_installation&amp;diff=19496</id>
		<title>Quick installation</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Quick_installation&amp;diff=19496"/>
		<updated>2016-04-29T10:49:56Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
{{Note|See [[Quick installation]] if you are looking to install the current stable version of OpenVZ.}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:2--&amp;gt;&lt;br /&gt;
This document briefly describes the steps needed to install Virtuozzo Linux distribution on your machine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
There are a few ways to install Virtuozzo:&lt;br /&gt;
&lt;br /&gt;
=== Bare-metal installation === &amp;lt;!--T:4--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:5--&amp;gt;&lt;br /&gt;
The OpenVZ project builds its own Linux distribution with both hypervisor and container virtualization.&lt;br /&gt;
It is based on our own Linux distribution, with the addition of [[Download/kernel/rhel7-testing|our custom kernel]], the OpenVZ management utilities, [[QEMU]] and the Virtuozzo installer. It is highly recommended to use OpenVZ containers and virtual machines with this Virtuozzo installation image. See [[Virtuozzo]].&lt;br /&gt;
[https://download.openvz.org/virtuozzo/releases/7.0-beta3/x86_64/iso/ Download] the installation ISO image.&lt;br /&gt;
&lt;br /&gt;
=== Using Virtuozzo in the Vagrant box === &amp;lt;!--T:6--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:7--&amp;gt;&lt;br /&gt;
[https://www.vagrantup.com/ Vagrant] is a tool for creating reproducible and portable development environments.&lt;br /&gt;
It is easy to run an environment with Virtuozzo using Vagrant:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:8--&amp;gt;&lt;br /&gt;
* Download and [https://docs.vagrantup.com/v2/installation/ install Vagrant]&lt;br /&gt;
* Download and install [https://www.virtualbox.org/wiki/Downloads VirtualBox], Parallels Desktop, VMware Fusion or VMware Workstation. Please note that you need to enable nested virtualization support in your hypervisor to run virtual machines on Virtuozzo 7. VirtualBox currently does not officially support nested virtualization.&lt;br /&gt;
* Download [https://atlas.hashicorp.com/OpenVZ/boxes/Virtuozzo-7.0 Virtuozzo box]:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:9--&amp;gt;&lt;br /&gt;
$ vagrant init OpenVZ/Virtuozzo-7.0&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:10--&amp;gt;&lt;br /&gt;
* Run box:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:11--&amp;gt;&lt;br /&gt;
$ vagrant up --provider virtualbox&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:12--&amp;gt;&lt;br /&gt;
or, in case of the VMware hypervisor:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:13--&amp;gt;&lt;br /&gt;
$ vagrant up --provider vmware_desktop&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:14--&amp;gt;&lt;br /&gt;
or, in case of the Parallels hypervisor:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:15--&amp;gt;&lt;br /&gt;
$ vagrant up --provider parallels&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
* Attach to console:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
$ vagrant ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
* Use ''vagrant/vagrant'' to log in to the box&lt;br /&gt;
&lt;br /&gt;
=== Using Virtuozzo in the Amazon EC2 === &amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:20--&amp;gt;&lt;br /&gt;
Follow steps in [[Using Virtuozzo in the Amazon EC2]].&lt;br /&gt;
&lt;br /&gt;
=== Setup on pre-installed Linux distribution === &amp;lt;!--T:21--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:22--&amp;gt;&lt;br /&gt;
{{Note|Pay attention: this installation method is currently blocked by broken networking after installation - {{OVZ|6454}}.}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:23--&amp;gt;&lt;br /&gt;
Alternatively, one can install OpenVZ on a pre-installed RPM-based Linux distribution.&lt;br /&gt;
Supported Linux distributions: Cloud Linux 7.*, CentOS 7.*, Scientific Linux 7.*, etc.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:24--&amp;gt;&lt;br /&gt;
Follow the step-by-step instructions below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:25--&amp;gt;&lt;br /&gt;
The ''virtuozzo-release'' package brings meta information and the YUM repositories:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:26--&amp;gt;&lt;br /&gt;
# yum localinstall http://download.openvz.org/virtuozzo/releases/7.0/x86_64/os/Packages/v/virtuozzo-release-7.0.0-10.vz7.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:27--&amp;gt;&lt;br /&gt;
EPEL is a prerequisite:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:28--&amp;gt;&lt;br /&gt;
# yum install -y epel-release&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:29--&amp;gt;&lt;br /&gt;
Then install mandatory Virtuozzo RPM packages:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:30--&amp;gt;&lt;br /&gt;
# yum install -y prlctl prl-disp-service vzkernel&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:31--&amp;gt;&lt;br /&gt;
See the OpenVZ [[Packages]] available in various Linux distributions.&lt;br /&gt;
&lt;br /&gt;
=== OpenVZ with upstream Linux kernel === &amp;lt;!--T:32--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:33--&amp;gt;&lt;br /&gt;
See [[OpenVZ with upstream kernel]] for more details about upstream kernel support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Virtuozzo == &amp;lt;!--T:34--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:35--&amp;gt;&lt;br /&gt;
The [[screencasts]] page shows demos of a few Virtuozzo commands. Feel free to add more.&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:36--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:37--&amp;gt;&lt;br /&gt;
* [https://docs.openvz.org/ Official Virtuozzo documentation]&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: Installation]]&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Quick_installation&amp;diff=19370</id>
		<title>Quick installation</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Quick_installation&amp;diff=19370"/>
		<updated>2016-03-01T16:13:36Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
{{Note|See [[Quick installation]] if you are looking to install the current stable version of OpenVZ.}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:2--&amp;gt;&lt;br /&gt;
This document briefly describes the steps needed to install Virtuozzo Linux distribution on your machine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
There are a few ways to install Virtuozzo:&lt;br /&gt;
&lt;br /&gt;
=== Bare-metal installation === &amp;lt;!--T:4--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:5--&amp;gt;&lt;br /&gt;
The OpenVZ project builds its own Linux distribution with both hypervisor and container virtualization.&lt;br /&gt;
It is based on our own Linux distribution, with the addition of [[Download/kernel/rhel7-testing|our custom kernel]], the OpenVZ management utilities, [[QEMU]] and the Virtuozzo installer. It is highly recommended to use OpenVZ containers and virtual machines with this Virtuozzo installation image. See [[Virtuozzo]].&lt;br /&gt;
[https://download.openvz.org/virtuozzo/releases/7.0-beta3/x86_64/iso/ Download] the installation ISO image.&lt;br /&gt;
&lt;br /&gt;
=== Using Virtuozzo in the Vagrant box === &amp;lt;!--T:6--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:7--&amp;gt;&lt;br /&gt;
[https://www.vagrantup.com/ Vagrant] is a tool for creating reproducible and portable development environments.&lt;br /&gt;
It is easy to run an environment with Virtuozzo using Vagrant:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:8--&amp;gt;&lt;br /&gt;
* Download and [https://docs.vagrantup.com/v2/installation/ install Vagrant]&lt;br /&gt;
* Download and install [https://www.virtualbox.org/wiki/Downloads VirtualBox], VMware Fusion or VMware Workstation&lt;br /&gt;
* Download [https://atlas.hashicorp.com/OpenVZ/boxes/Virtuozzo-7.0 Virtuozzo box]:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:9--&amp;gt;&lt;br /&gt;
$ vagrant init OpenVZ/Virtuozzo-7.0&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:10--&amp;gt;&lt;br /&gt;
* Run box:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:11--&amp;gt;&lt;br /&gt;
$ vagrant up --provider virtualbox&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:12--&amp;gt;&lt;br /&gt;
or, in case of the VMware hypervisor:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:13--&amp;gt;&lt;br /&gt;
$ vagrant up --provider vmware_desktop&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:14--&amp;gt;&lt;br /&gt;
or, in case of the Parallels hypervisor:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:15--&amp;gt;&lt;br /&gt;
$ vagrant up --provider parallels&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
* Attach to console:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
$ vagrant ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
* Use ''vagrant/vagrant'' to log in to the box&lt;br /&gt;
&lt;br /&gt;
=== Using Virtuozzo in the Amazon EC2 === &amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:20--&amp;gt;&lt;br /&gt;
Follow steps in [[Using Virtuozzo in the Amazon EC2]].&lt;br /&gt;
&lt;br /&gt;
=== Setup on pre-installed Linux distribution === &amp;lt;!--T:21--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:22--&amp;gt;&lt;br /&gt;
{{Note|Pay attention: this installation method is currently blocked by broken networking after installation - {{OVZ|6454}}.}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:23--&amp;gt;&lt;br /&gt;
Alternatively, one can install OpenVZ on a pre-installed RPM-based Linux distribution.&lt;br /&gt;
Supported Linux distributions: Cloud Linux 7.*, CentOS 7.*, Scientific Linux 7.*, etc.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:24--&amp;gt;&lt;br /&gt;
Follow the step-by-step instructions below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:25--&amp;gt;&lt;br /&gt;
The ''virtuozzo-release'' package brings meta information and the YUM repositories:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:26--&amp;gt;&lt;br /&gt;
# yum localinstall http://download.openvz.org/virtuozzo/releases/7.0/x86_64/os/Packages/v/virtuozzo-release-7.0.0-10.vz7.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:27--&amp;gt;&lt;br /&gt;
EPEL is a prerequisite:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:28--&amp;gt;&lt;br /&gt;
# yum install -y epel-release&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:29--&amp;gt;&lt;br /&gt;
Then install mandatory Virtuozzo RPM packages:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:30--&amp;gt;&lt;br /&gt;
# yum install -y prlctl prl-disp-service vzkernel&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:31--&amp;gt;&lt;br /&gt;
See OpenVZ [[Packages]] available in various Linux distributions.&lt;br /&gt;
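Taken together, the three YUM steps above amount to a short provisioning sketch (a sketch only, assuming a supported EL7 system; the final reboot into the new vzkernel is an assumption about the usual workflow and is left commented out):&lt;br /&gt;

```shell
#!/bin/sh
# Run as root on a supported RPM-based distribution (CentOS 7.*, etc.).
# Guarded so the script is a no-op where yum is unavailable.
if command -v yum >/dev/null; then
  # Release package: metadata plus YUM repository definitions.
  yum localinstall -y http://download.openvz.org/virtuozzo/releases/7.0/x86_64/os/Packages/v/virtuozzo-release-7.0.0-10.vz7.x86_64.rpm
  # EPEL is a prerequisite.
  yum install -y epel-release
  # Mandatory Virtuozzo packages.
  yum install -y prlctl prl-disp-service vzkernel
  # Assumed follow-up step: reboot into the newly installed vzkernel.
  # reboot
fi
```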
&lt;br /&gt;
=== OpenVZ with upstream Linux kernel === &amp;lt;!--T:32--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:33--&amp;gt;&lt;br /&gt;
See the [[OpenVZ with upstream kernel]] article for more details about upstream kernel support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Virtuozzo == &amp;lt;!--T:34--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:35--&amp;gt;&lt;br /&gt;
The [[screencasts]] page demonstrates a few Virtuozzo commands. Feel free to add more.&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:36--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:37--&amp;gt;&lt;br /&gt;
* [https://docs.openvz.org/ Official Virtuozzo documentation]&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: Installation]]&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=17751</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=17751"/>
		<updated>2015-09-28T11:48:52Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding [[Virtuozzo]] 7 is provided by Odin. Here is Odin's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remain at Odin’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Odin commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. This information can be changed by any OpenVZ Wiki user without notice or the author's review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ (stable)&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;7&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;7 Plus&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environments, no dependency on the host OS, at the cost of hypervisor-layer overhead.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same host OS instance: high density, high performance, strong dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running Virtual Machines.&lt;br /&gt;
|None&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (moved from OpenVZ since 4.0)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Central Management tool'''&lt;br /&gt;
|Is centralized multi-server management available for this edition?&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, PVA&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Update Management'''&lt;br /&gt;
|Integrated patching mechanism for the virtual environments (Guest OS) / guest tools / templates&lt;br /&gt;
|No integrated update, YUM (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, APT (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|Yes (WSUS, SCCM, Virtual Machine Servicing Tool 2012 for offline VM update)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backups using agents in the guests)?&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or add-on P2V (or V2V) capability to convert physical systems into virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not zero-downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU&lt;br /&gt;
|{{Yes}}, CRIU&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, which automatically live migrates all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically migrate VEs onto fewer hosts, power off unused hosts, and wake them back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Cluster size'''&lt;br /&gt;
|Maximum number of hosts in the cluster/pool relationship and maximum number of VEs per cluster/pool (if specified)&lt;br /&gt;
|None&lt;br /&gt;
|32 hosts/cluster validated (100 hosts/cluster maximum) - PStorage limitation&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|None&lt;br /&gt;
|32 nodes&lt;br /&gt;
|64 nodes&lt;br /&gt;
|200 nodes&lt;br /&gt;
|16 nodes&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Site Failover'''&lt;br /&gt;
|Integrated ability to (ideally live) migrate virtual machine data (virtual disk files) to different storage e.g. for array upgrades/migration and I/O management&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Manual&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Integrated Disaster Recovery - manual&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|SAN, NAS (NFS, ZFS), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - ploop&lt;br /&gt;
|CT - ploop, VM - ploop&lt;br /&gt;
|CT - ploop, VM - ploop\Qcow2&lt;br /&gt;
|CT - ploop, VM - ploop\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}} (depends on underlying storage driver)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|TBD&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{VMs only}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Advanced Network Switch'''&lt;br /&gt;
|Centralized virtual network configuration (rather than managing virtual switches on individual hosts), typically with enhanced networking capabilities&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Open vSwitch support&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|OpenStack Neutron Integration&lt;br /&gt;
|Open vSwitch integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network I/O or throughput for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most features are relevant only for Odin Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Bitnami)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Driver for OpenStack Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|(LXC and KVM supported through libvirt)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel maintenance'''&lt;br /&gt;
|Ability to upgrade the kernel with minimal downtime.&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|kernel rebootless update (vzreboot)&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|kernel rebootless update (vzreboot)&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|None&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=17750</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=17750"/>
		<updated>2015-09-28T11:47:05Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding [[Virtuozzo]] 7 is provided by Odin. Here is Odin's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remain at Odin’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Odin commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. This information can be changed by any OpenVZ Wiki user without notice or the author's review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ (stable)&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;7&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;7 Plus&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environments, no dependency on the host OS, at the cost of hypervisor-layer overhead.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same host OS instance: high density, high performance, strong dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running Virtual Machines.&lt;br /&gt;
|None&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (moved from OpenVZ since 4.0)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Central Management tool'''&lt;br /&gt;
|Is centralized multi-server management available for this edition?&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, PVA&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Update Management'''&lt;br /&gt;
|Integrated patching mechanism for the virtual environments (Guest OS) / guest tools / templates&lt;br /&gt;
|No integrated update, YUM (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, APT (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|Yes (WSUS, SCCM, Virtual Machine Servicing Tool 2012 for offline VM update)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backups using agents in the guests)?&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or add-on P2V (or V2V) capability to convert physical systems into virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not zero-downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU&lt;br /&gt;
|{{Yes}}, CRIU&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, which automatically live migrates all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically migrate VEs onto fewer hosts, power off unused hosts, and wake them back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Cluster size'''&lt;br /&gt;
|Maximum number of hosts in the cluster/pool relationship and maximum number of VEs per cluster/pool (if specified)&lt;br /&gt;
|None&lt;br /&gt;
|32 hosts/cluster validated (100 hosts/cluster maximum) - PStorage limitation&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|None&lt;br /&gt;
|32 nodes&lt;br /&gt;
|64 nodes&lt;br /&gt;
|200 nodes&lt;br /&gt;
|16 nodes&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Site Failover'''&lt;br /&gt;
|Integrated ability to (ideally live) migrate virtual machine data (virtual disk files) to different storage e.g. for array upgrades/migration and I/O management&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Manual&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Integrated Disaster Recovery - manual&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|SAN, NAS (NFS, ZFS), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - ploop&lt;br /&gt;
|CT - ploop, VM - ploop&lt;br /&gt;
|CT - ploop, VM - ploop\Qcow2&lt;br /&gt;
|CT - ploop, VM - ploop\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}} (depends on underlying storage driver)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|TBD&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{VMs only}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Advanced Network Switch'''&lt;br /&gt;
|Centralized virtual network configuration (rather than managing virtual switches on individual hosts), typically with enhanced networking capabilities&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Open vSwitch support&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|OpenStack Neutron Integration&lt;br /&gt;
|Open vSwitch integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network bandwidth or traffic priority for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of the features are relevant only for Odin Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Bitnami)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Driver for OpenStack Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|(LXC and KVM supported through libvirt)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel maintenance'''&lt;br /&gt;
|Ability to upgrade kernel with minimal downtime.&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|kernel rebootless update (vzreboot)&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|kernel rebootless update (vzreboot)&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|None&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Commercial support (Canonical Ltd.)&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=17749</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=17749"/>
		<updated>2015-09-28T11:43:44Z</updated>

		<summary type="html">&lt;p&gt;Vporokhov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding [[Virtuozzo]] 7 is provided by Odin. Here is Odin's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remain at Odin’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Odin commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. This information can be changed by any OpenVZ Wiki user without notice or the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ (stable)&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;7&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;7 Plus&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependency on the host OS, some overhead from the hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Guests share the same host OS instance: high density, high performance, strong dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running Virtual Machines.&lt;br /&gt;
|None&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (switched from OpenVZ in version 4.0)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Central Management tool'''&lt;br /&gt;
|Is centralized multi-server management available for this edition?&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, PVA&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Update Management'''&lt;br /&gt;
|Integrated patching mechanism for the virtual environments (Guest OS) / guest tools / templates&lt;br /&gt;
|No integrated update, YUM (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, APT (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|Yes (WSUS, SCCM, Virtual Machine Servicing Tool 2012 for offline VM update)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backup using agents in the guests)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability in order to convert physical systems to virtual environment.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration of VEs'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not zero-downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU&lt;br /&gt;
|{{Yes}}, CRIU&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Automated Live Migration (DRS)'''&lt;br /&gt;
|Ability to put a host into maintenance mode, automatically live-migrating all virtual machines onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage Migration'''&lt;br /&gt;
|Integrated ability to (ideally live) migrate virtual machine data (virtual disk files) to different storage, e.g. for array upgrades/migration and I/O management&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Cluster size'''&lt;br /&gt;
|Maximum number of hosts in the cluster/pool relationship and maximum number of VMs per cluster/pool (if specified)&lt;br /&gt;
|None&lt;br /&gt;
|32 hosts/cluster validated (100 hosts/cluster maximum) - PStorage limitation&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|None&lt;br /&gt;
|32 nodes&lt;br /&gt;
|64 nodes&lt;br /&gt;
|200 nodes&lt;br /&gt;
|16 nodes&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Site Failover'''&lt;br /&gt;
|Integrated ability to fail over virtual environments to a secondary site in case of a site-wide failure (disaster recovery)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Manual&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Integrated Disaster Recovery - manual&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|SAN, NAS (NFS, ZFS), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - ploop&lt;br /&gt;
|CT - ploop, VM - ploop&lt;br /&gt;
|CT - ploop, VM - ploop\Qcow2&lt;br /&gt;
|CT - ploop, VM - ploop\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}} (depends on underlying storage driver)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|TBD&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{VMs only}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Advanced Network Switch'''&lt;br /&gt;
|Centralized virtual network configuration (rather than managing virtual switches on individual hosts), typically with enhanced networking capabilities&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Open vSwitch support&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|OpenStack Neutron Integration&lt;br /&gt;
|Open vSwitch integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network bandwidth or traffic priority for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of the features are relevant only for Odin Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST-like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Bitnami)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Driver for OpenStack Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|(LXC and KVM supported through libvirt)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel maintenance'''&lt;br /&gt;
|Ability to upgrade kernel with minimal downtime.&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|kernel rebootless update (vzreboot)&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|kernel rebootless update (vzreboot)&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|None&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool that lets end users manage their own virtual machines and containers.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Commercial support (Canonical Ltd.)&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Vporokhov</name></author>
		
	</entry>
</feed>