Revision as of 14:46, 14 March 2011

OpenVZ Linux Containers technology whitepaper

OpenVZ is a virtualization technology for Linux, not unlike Xen, KVM, or VMware. As with other products, it lets one partition a single physical machine into multiple smaller virtual machines. The difference is in the technology used for partitioning.

Single kernel concept

Xen, KVM, VMware and other hypervisor-based products provide the ability to have multiple instances of virtual hardware (called VMs, Virtual Machines) on a single piece of real hardware. On top of that virtual hardware one can run any operating system, so it is possible to run multiple different OSs on a single server. Each VM runs a full software stack, including an OS kernel.

In contrast, OpenVZ uses a single-kernel approach: there is only one OS kernel running, and on top of it there are multiple isolated instances of user-space programs. This approach is more lightweight than a VM. The consequences are:

  1. Eliminating the need to run multiple OS kernels leads to higher density of containers (compared to VMs).
  2. The software stack between an application and the hardware is much thinner, which means higher performance of containers (compared to VMs).

Containers overhead

OpenVZ works almost as fast as an ordinary Linux system. The only overhead is for networking and additional resource management (see below), and in most cases it is negligible.

File system

From the file system point of view, a container is just a chroot() environment. In other words, a container's file system root is merely a directory on the host system (usually /vz/root/$CTID/, under which one can find the usual directories such as /etc, /lib, /bin, etc.). The consequences are:

  • there is no need for a separate block device, hard drive partition, or filesystem-in-a-file setup
  • the host system administrator can see all of the containers' files
  • container backup and restore is trivial
  • mass deployment is easy
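Since a container root is just a directory, mapping a path inside a container to the host-side path is plain string manipulation. A minimal sketch (the /vz/root/$CTID layout is taken from the text above; CTID 101 is a made-up example):

```python
import os

def container_root(ctid, base="/vz/root"):
    # A container's file system root is just a directory on the host,
    # e.g. /vz/root/101 for container 101.
    return os.path.join(base, str(ctid))

def container_path(ctid, path):
    # Translate a path as seen inside the container to the host-side path.
    return os.path.join(container_root(ctid), path.lstrip("/"))

# The host admin sees container 101's /etc/passwd here:
print(container_path(101, "/etc/passwd"))  # /vz/root/101/etc/passwd
```

This is why backup, restore, and mass deployment reduce to ordinary file operations on the host.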

OpenVZ host system scope

From the host system, all containers' processes are visible, since there is only one kernel.
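Because there is a single kernel, every process on the machine appears under the host's /proc, regardless of which container it belongs to. A small sketch of how a host-side tool could enumerate them (on Linux, each running process has a numeric directory under /proc):

```python
import os

def host_visible_pids(proc="/proc"):
    # Every numeric entry under /proc is a process ID; on an OpenVZ host
    # this includes processes running inside containers.
    return sorted(int(entry) for entry in os.listdir(proc) if entry.isdigit())

pids = host_visible_pids()
print(len(pids), "processes visible from the host")
```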

Resource control

Due to the single-kernel model, all containers share the same set of resources: CPU, memory, disk, and network.

Every container can use all of the available hardware resources if configured to do so. On the other hand, containers should not step on each other's toes, so all resources are accounted for and controlled by the kernel.
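The kernel's per-container accounting can be pictured as a counter with a soft and a hard bound. A toy sketch in Python (the barrier/limit split and the failure counter are modeled loosely on OpenVZ user beancounters; this is an illustration, not the kernel implementation):

```python
class Beancounter:
    """Toy per-container resource counter with a soft (barrier) and
    hard (limit) bound, loosely modeled on OpenVZ user beancounters."""

    def __init__(self, barrier, limit):
        self.barrier = barrier   # soft limit; real behaviour between
                                 # barrier and limit is resource-specific
                                 # and not modelled here
        self.limit = limit       # hard limit: charges beyond it fail
        self.held = 0            # currently charged amount
        self.failcnt = 0         # number of denied charges

    def charge(self, amount):
        if self.held + amount > self.limit:
            self.failcnt += 1    # allocation denied, failure recorded
            return False
        self.held += amount
        return True

    def uncharge(self, amount):
        self.held = max(0, self.held - amount)

mem = Beancounter(barrier=256, limit=512)
print(mem.charge(400))   # True: within the hard limit
print(mem.charge(200))   # False: 600 would exceed the limit of 512
```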

FIXME link to resource management whitepaper goes here

Networking (routed/bridged)

Does it differ much from VMs?

Other features

  • Live migration