= Containers versus Virtual Machines =
 
This article gives an overview of containers for those who know what a Virtual Machine (VM) is. VMs are implemented by products such as VMware, KVM, Xen, Parallels Desktop, Hyper-V, VirtualBox etc. If you do not have experience with those or similar products, read the [[../What are containers/]] whitepaper first.
 
Containers are very similar to VMs in the sense that they also let one partition a single physical computer into multiple small isolated partitions (called VMs or containers).
 
== Networking ==
 
An operating system deployed inside a VM assumes it is running on top of real hardware, therefore the VM needs to emulate a real physical network card (NIC). All traffic has to go through both the virtual NIC and the real NIC. This creates noticeable overhead, which can be mitigated by using a paravirtualized NIC driver inside the VM (i.e. a driver that is aware of the hypervisor and can therefore avoid full hardware emulation). A paravirtualized NIC driver cannot always be used, as it is specific to a guest OS/version. See [[Performance/Network Throughput]] for a network performance comparison between OpenVZ and various hypervisor products.
 
In case of containers, no virtual NIC is needed (naturally, there can be no hardware drivers inside a container). Instead, the kernel lets a container use a network interface, which may or may not be directly attached to a real network card. The difference from the VM case is that there is no hardware emulation of any kind. See more in the [[WP/What are containers#Networking|Networking chapter of What are containers]].
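
To illustrate, here is a minimal sketch (an illustration for this article, not part of OpenVZ tooling) that uses the Linux sysfs interface to tell hardware-backed interfaces from purely virtual ones: a physical NIC exposes a <code>device</code> symlink pointing to the underlying hardware, while interfaces like a container's <code>venet0</code> do not, since no hardware driver is behind them.

<pre>
# A minimal sketch, assuming a Linux system (host or container) with
# sysfs mounted. Purely virtual interfaces (lo, venet0, veth*) have no
# "device" symlink in /sys/class/net -- no hardware driver backs them.
import os

NET = "/sys/class/net"
for name in sorted(os.listdir(NET)):
    backed = os.path.exists(os.path.join(NET, name, "device"))
    print(f"{name}: {'physical NIC' if backed else 'virtual interface'}")
</pre>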
 
In terms of network topology organization, there are no big differences between VMs and containers. In both cases one can use either route-based (OSI layer 3) or bridge-based (OSI layer 2) networking.
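
For the bridge-based case, the bridge and its member ports are again visible through sysfs. A small sketch (illustrative only; interface names will differ per setup):

<pre>
# Enumerate Linux bridges and their member ports via sysfs. This is the
# layer-2 ("bridge-based") setup; with route-based (layer-3) networking
# no bridge is involved at all.
import os

NET = "/sys/class/net"
for name in sorted(os.listdir(NET)):
    brif = os.path.join(NET, name, "brif")  # exists only for bridges
    if os.path.isdir(brif):
        ports = sorted(os.listdir(brif)) or ["(no ports)"]
        print(f"bridge {name}: {', '.join(ports)}")
</pre>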
 
== Memory management ==
 
In the case of VMs, every guest operating system kernel "thinks" it is managing real hardware, including memory (RAM). It performs memory allocation, accounting and so on.
 
Managing VM memory is therefore more involved: memory is typically reserved for a guest when it starts, whether or not the guest actually uses it; sharing identical pages between VMs requires dedicated hypervisor machinery that scans for and merges duplicates; reclaiming memory from a running guest requires a ballooning driver installed inside it; and when a guest swaps, the swap traffic goes through an emulated (or, at best, paravirtualized) disk, making an already slow path slower still.
 
In contrast, containers' memory is managed by a single entity -- the kernel. Therefore:
* there is no need for ballooning;
* sharing memory pages between containers is trivial;
* there is no need to preallocate memory for a container (CT) when it starts -- memory is allocated on demand (see the sketch after this list);
* swapping is as fast as on a usual non-virtualized system.
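
The on-demand allocation point is easy to observe on any Linux system; the following sketch (illustrative only, nothing OpenVZ-specific) shows the resident set of a process growing only once memory is actually touched:

<pre>
# A minimal demonstration: RSS grows only when pages are touched, which
# is also how a container's footprint grows -- nothing is preallocated
# when the container starts.
import resource

def max_rss_kb():
    # ru_maxrss is reported in kilobytes on Linux
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

print("before allocation:", max_rss_kb(), "KB")
buf = bytearray(64 * 1024 * 1024)  # allocate and zero-fill 64 MB
print("after allocation: ", max_rss_kb(), "KB")
</pre>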
 
In addition, the latest version of the OpenVZ kernel (based on RHEL6) uses the VSwap technology. VSwap simplifies memory management considerably: instead of tuning the numerous user beancounter parameters of earlier kernels, one only needs to set two limits per container -- RAM (<code>physpages</code>) and swap (<code>swappages</code>).
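
Per-container limits can be checked from the host via <code>/proc/user_beancounters</code>. The sketch below is a rough illustration (it assumes the usual beancounters layout of <code>uid: resource held maxheld barrier limit failcnt</code> and needs root on an OpenVZ host):

<pre>
# A rough sketch: list RAM/swap limits of containers from the host.
# physpages/swappages values are counted in memory pages.

def read_beancounters(path="/proc/user_beancounters"):
    data, ctid = {}, None
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields or fields[0] in ("Version:", "uid"):
                continue                        # skip header lines
            if fields[0].endswith(":"):         # start of a CT block
                ctid, fields = fields[0].rstrip(":"), fields[1:]
            name, held, _maxheld, _barrier, limit, _failcnt = fields
            data.setdefault(ctid, {})[name] = (int(held), int(limit))
    return data

for ctid, resources in read_beancounters().items():
    for name in ("physpages", "swappages"):
        if name in resources:
            held, limit = resources[name]
            print(f"CT {ctid}: {name} held={held}, limit={limit} (pages)")
</pre>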
 
== Density ==
  
See [[WP/Containers density]].
 
== Performance ==
 
See [[Performance]].
 