Containers versus Virtual Machines
This article gives an overview of containers for those who already know what a Virtual Machine (VM) is. VMs are implemented by products such as VMware, KVM, Xen, Parallels Desktop, Hyper-V, VirtualBox etc. If you do not have experience with those or similar products, it is better to first read the What are containers whitepaper.
Containers are very similar to VMs in the sense that they also let one partition a single physical computer into multiple small isolated partitions (called VMs or containers). The difference is in the technique used for such partitioning.
Some of the major differences between VMs and containers, as well as their consequences, are outlined below.
Single kernel concept
Xen, KVM, VMware and other hypervisor-based products provide the ability to have multiple instances of virtual hardware (called VMs – Virtual Machines) on a single piece of real hardware. On top of that virtual hardware one can run any operating system, so it is possible to run multiple different OSs on a single server. Each VM runs a full software stack, including an OS kernel.
In contrast, OpenVZ and container technology in general use a single-kernel approach. There is only one OS kernel running, and on top of it there are multiple isolated instances of user-space programs. This approach is more lightweight than a VM. The consequences are:
- Removing the need to run multiple OS kernels results in a higher density of containers (compared to VMs)
- The software stack that lies between an application and the hardware is much thinner, which means higher performance of containers (compared to VMs)
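A quick way to see the single-kernel approach in practice (a minimal sketch, not from the original article): the same check run inside any container and on the host node reports the identical kernel release, because there is only one kernel; in a VM the guest runs its own kernel, so the values can differ.

 import platform
 
 # Inside an OpenVZ container there is no separate guest kernel:
 # every container reports the same kernel release as the host node.
 print("Kernel release:", platform.release())
 print("Kernel version:", platform.version())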
File system
From the file system point of view, a container is just a chroot() environment. In other words, a container file system root is merely a directory on the host system (usually /vz/root/$CTID/, under which one can find the usual directories like /etc, /lib, /bin etc.). The consequences are:
- there is no need for a separate block device, hard drive partition or filesystem-in-a-file setup
- host system administrator can see all the containers' files
- container backup/restore is trivial
- there is no I/O overhead (for VMs it can be as high as 1.5x to 3x, especially for small requests)
- mass deployment is easy
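To make the "a container root is just a host directory" point concrete, here is a minimal hypothetical sketch (not part of the OpenVZ tools) that enters such a directory the same way a chroot() environment does; /vz/root/101 is only an assumed example container root, and the script has to run as root:

 import os
 
 # Hypothetical example: enter a container root directory as a plain
 # chroot() environment. /vz/root/101 is an assumed path; adjust it,
 # and run this as root.
 ct_root = "/vz/root/101"
 
 os.chroot(ct_root)   # make ct_root the new file system root
 os.chdir("/")        # move into the new root
 
 # From here "/etc", "/bin" etc. refer to the container's files, which
 # are still ordinary files under /vz/root/101 on the host.
 print(os.listdir("/"))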
Networking
An operating system deployed inside a VM assumes it is running on top of real hardware, therefore the VM needs to emulate a real physical network card (NIC). All the traffic has to go through both the real NIC and the virtual NIC. This creates noticeable overhead, which can be mitigated by using a paravirtualized NIC driver inside the VM (i.e. a driver that is aware of the hypervisor and can therefore talk to it directly instead of going through full hardware emulation). A paravirtualized NIC driver cannot always be used, as it is specific to a guest OS and version. See Performance/Network Throughput for a network performance comparison between OpenVZ and various hypervisor products.
In the case of containers, no virtual NIC is needed (naturally, there can be no hardware drivers inside a container). Instead, the kernel lets a container use a network interface, which may or may not be directly attached to a real network card. The difference from the VM case is that there is no hardware emulation of any kind. See more at the Networking chapter of What are containers.
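As a small illustrative sketch (the assumptions here are not from the original article: it is run inside a running container, and interface names such as lo, venet0 or eth0 depend on how the container was configured), listing the interfaces the kernel exposes to a container shows only virtual interfaces, with no emulated hardware NIC behind them:

 import socket
 
 # List the network interfaces visible inside this container.
 # Typically only a loopback (lo) and a virtual interface such as
 # venet0 or eth0 show up; none of them is an emulated hardware NIC.
 for index, name in socket.if_nameindex():
     print(index, name)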
In terms of network topology organization, there are no big differences between VMs and containers. In both cases one can use route-based (OSI level 3) or bridge-based (OSI level 2) networking.
Memory management
In the case of VMs, every guest operating system kernel "thinks" it is managing real hardware, including memory (RAM). It performs memory allocation, accounting and so on.
This duplicated management has a price: identical memory pages in different VMs can only be shared if the hypervisor deduplicates them behind the guests' backs, reclaiming unused guest memory requires a ballooning driver inside each guest, and when a guest starts swapping, its swap traffic goes through emulated disk I/O, which is slow.
In contrast, containers' memory is managed by a single entity -- the kernel. Therefore:
- there is no need for ballooning;
- sharing memory pages between containers is trivial;
- there is no need to preallocate memory for a container (CT) when it starts -- memory is allocated on demand;
- swapping is as fast as on a usual non-virtualized system.
In addition, the latest version of the OpenVZ kernel (based on RHEL 6) uses the VSwap technology, which simplifies memory management by reducing the many per-container memory parameters to essentially two: the amount of RAM and the amount of swap a container may use.
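As a hedged sketch of how one can inspect a container's memory accounting from inside it (assuming an OpenVZ kernel that exposes /proc/user_beancounters; the exact set of counters and columns may differ between kernel versions):

 # Print the memory-related beancounters (physpages, swappages) of this
 # container. Assumes an OpenVZ kernel; the file layout may vary a bit
 # between kernel versions.
 with open("/proc/user_beancounters") as f:
     for line in f:
         fields = line.split()
         # Lines look like: [uid:] resource held maxheld barrier limit failcnt
         if any(name in fields for name in ("physpages", "swappages")):
             print(line.rstrip())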
Density
See WP/Containers density.
Performance
See Performance.