An operating system deployed inside a VM assumes it is running on top of real hardware, so the VM needs to emulate a real physical network card (NIC). All traffic then has to go through both the real NIC and the virtual NIC. This creates noticeable overhead, which can be mitigated by using a paravirtualized NIC driver inside the VM (i.e. a driver that is aware of the hypervisor and can therefore avoid full hardware emulation). A paravirtualized NIC driver cannot always be used, as it is specific to a guest OS/version. See [[Performance/Network Throughput]] for a network performance comparison between OpenVZ and various hypervisor products.
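As an illustration, on a Linux guest one can check whether the NIC is paravirtualized by asking which kernel driver backs the interface (the interface name <code>eth0</code> is an assumption here; adjust it for your system):

```shell
# Show the kernel driver behind the interface.
# "virtio_net" (KVM), "vmxnet3" (VMware) or "netfront" (Xen)
# indicate a paravirtualized NIC; "e1000" means full emulation.
ethtool -i eth0 | grep '^driver:'

# Alternatively, list PCI devices and look for virtio entries:
lspci | grep -i virtio
```

These are read-only diagnostic commands; their output naturally depends on the hypervisor and guest configuration.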
In the case of containers, no virtual NIC is needed (naturally, there can be no hardware drivers inside a container). Instead, the kernel lets a container use a network interface, which may or may not be directly attached to a real network card. The difference from the VM case is that there is no hardware emulation of any kind. See more in the [[WP/What are containers#Networking|Networking chapter of What are containers]].
In terms of network topology, there are no big differences between VMs and containers. In both cases one can use route-based (OSI layer 3) or bridge-based (OSI layer 2) networking.
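A minimal sketch of both approaches for a container on Linux, using a veth pair (the names <code>veth0</code>, <code>veth1</code>, <code>br0</code>, the address <code>10.0.0.2</code>, and the <code>CONTAINER_PID</code> placeholder are assumptions for illustration; the commands need root):

```shell
# Common step: create a veth pair and move one end into
# the container's network namespace.
ip link add veth0 type veth peer name veth1
ip link set veth1 netns CONTAINER_PID
ip link set veth0 up

# Bridge-based (OSI layer 2): attach the host end of the pair
# to a bridge; the container appears directly on that segment.
ip link add br0 type bridge
ip link set veth0 master br0
ip link set br0 up

# Route-based (OSI layer 3) alternative: no bridge; the host
# routes traffic for the container's address via the veth end
# and forwards packets between interfaces.
ip route add 10.0.0.2/32 dev veth0
sysctl -w net.ipv4.ip_forward=1
```

Bridged setups give the container an address on the same L2 segment as the host NIC, while routed setups keep the container behind the host, which then makes pure routing decisions for it.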
== Density ==
See [[WP/Containers density/]].
== Performance ==
See [[Performance]].