== Networking ==
 
An operating system deployed inside a VM assumes it is running on top of real hardware, therefore the VM needs to emulate a real physical network card (NIC). All traffic has to go through both the virtual NIC and the real NIC. This creates noticeable overhead, which can be mitigated by using a paravirtualized NIC driver inside the VM (i.e. a driver that is aware of the hypervisor and can therefore communicate with it more efficiently than through emulated hardware). A paravirtualized NIC driver can not always be used, as it is specific to a guest OS and version. See [[Performance/Network Throughput]] for a network performance comparison between OpenVZ and various hypervisor products.
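
As a rough illustration (not specific to this article), one way to check whether a guest is actually using a paravirtualized NIC driver; the interface name eth0 and the virtio_net driver are assumptions for a typical KVM guest:

 # Inside the guest: show which driver backs eth0.
 # "virtio_net" indicates a paravirtualized NIC, "e1000" an emulated one.
 ethtool -i eth0
 # Alternatively, check whether the virtio modules are loaded:
 lsmod | grep virtio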
 
In the case of containers, no virtual NIC is needed (naturally, there can be no hardware drivers inside a container). Instead, the kernel lets a container use a network interface, which may or may not be directly attached to a real network card. The difference from the VM case is that there is no hardware emulation of any kind. See more in the [[What are containers#Networking|Networking chapter of What are containers]].
 
In terms of network topology, there are no big differences between VMs and containers. In both cases one can use route-based (OSI layer 3) or bridge-based (OSI layer 2) networking.
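
As a minimal sketch, both setups can be configured with vzctl; the container ID 101, the IP address, and the bridge name br0 below are made-up values for illustration:

 # Route-based (layer 3) networking: assign an IP to the container's venet device.
 vzctl set 101 --ipadd 192.168.0.101 --save
 
 # Bridge-based (layer 2) networking: create a veth pair for the container...
 vzctl set 101 --netif_add eth0 --save
 # ...and attach the host-side end (named veth101.0 by default) to an existing bridge.
 brctl addif br0 veth101.0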
== Memory management ==
 
In the case of VMs, every guest operating system kernel "thinks" it is managing some hardware, including memory (RAM). It performs memory allocation, accounting, FIXME
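
For containers, by contrast, it is the host kernel that does the memory accounting. As a rough sketch (container ID 101 and the values are illustrative), the per-container beancounters can be inspected and adjusted like this:

 # On the host or inside a container: per-container resource accounting,
 # including memory-related beancounters such as privvmpages and physpages.
 cat /proc/user_beancounters
 
 # Adjust a memory limit for a container (barrier:limit, values illustrative):
 vzctl set 101 --privvmpages 262144:262144 --save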
== Density ==
There is a separate article which explains the differences in the technologies and why containers are better suited for high-density environments. See [[WP/Containers density/]].
== Performance ==
FIXME: link to performance paper. See [[Performance]].