= OpenVZ Linux Containers technology whitepaper =
OpenVZ is a virtualization technology for Linux which lets one partition a single physical Linux machine into multiple smaller units called containers. Technically, it consists of three major building blocks:
* Namespaces
* Resource management
* Checkpointing and live migration
== Namespaces ==
A namespace is a kernel feature that limits the scope of some resource, so that each group of processes sees its own instance of that resource. Here, namespaces are used as container building blocks. A simple case of a namespace is chroot.
=== Chroot ===
=== Other namespaces ===
OpenVZ builds on the chroot idea and expands it to everything else that applications have. In other words, every API that the kernel provides to applications is "namespaced", making sure every container has its own isolated subset of a resource. Examples include:
* '''File system namespace''' -- this one is chroot() itself, making sure containers can't see each other's files.
* '''Process ID namespace''', so every container has its own unique process IDs, and the first process inside a container has a PID of 1 (it is usually the /sbin/init process, which actually relies on its PID being 1). Containers can only see their own processes, and they can't see (or access in any way, say by sending a signal) processes in other containers.
* '''IPC namespace''', so every container has its own IPC (Inter-Process Communication) shared memory segments, semaphores, and messages.
* '''Networking namespace''', so every container has its own network devices, IP addresses, routing rules, firewall (iptables) rules, network caches and so on.
* '''<code>/proc</code> and <code>/sys</code> namespaces''', so every container has its own representation of <code>/proc</code> and <code>/sys</code> -- special filesystems used to export kernel information to applications. In a nutshell, these are subsets of what the host system has.
* '''UTS namespace''', so every container can have its own hostname.
Note that memory and CPU need not be namespaced: the existing virtual memory and multitasking mechanisms already take care of partitioning them.
=== Single kernel approach ===
So, namespaces let a single kernel run multiple isolated containers. To put it simply, a container is a sum of namespaces. Therefore, there is only one OS kernel running, and on top of it there are multiple isolated containers, sharing that single kernel.
The single kernel approach is much more lightweight than traditional VM-style virtualization. The consequences of having only one kernel are:
# Waiving the need to run multiple OS kernels leads to '''higher density''' of containers (compared to VMs).
# The software stack that lies between an application and the hardware is much thinner, which means '''higher performance''' of containers (compared to VMs).
# A container can only run Linux.
== Resource management ==
Due to the single kernel model, there is one single entity which controls all of the resources: the kernel. All the containers share the same set of resources: CPU, memory, disk and network. All these resources need to be controlled on a per-container basis, for the containers to not step on each other's toes. All such resources are accounted for and controlled by the kernel.

It is important to understand that resources are not pre-allocated, but just limited. That means:
* all the resource limits can be changed dynamically (at run time);
* if a resource is not used, it is available.

Let's see what resources are controlled and how.

=== CPU ===

The kernel CPU scheduler is modified to be container-aware. When it is time for a context switch, the scheduler decides which task to give a CPU time slice to. A traditional scheduler just chooses one among all the runnable tasks in the system. The OpenVZ scheduler implements a two-level schema: it chooses a container first, then it chooses a task inside that container. That way, all the containers get a fair share of CPU resources (regardless of the number of processes inside each container).

The following CPU scheduler settings are available per container:
* '''CPU units''': a proportional "weight" of a container. The more units a container has, the more CPU time it will get. Assuming we have two containers with equal CPU units, when both containers want CPU time (e.g. by running busy loops), each one will get 50%. If we double the CPU units of one container, it will get two times more CPU (i.e. 66%, while the other will take 33%). Note, however, that if other containers are idle, a single container can get as much as 100% of the available CPU time.
* '''CPU limit''': a hard limit on a share of CPU time. For example, if we set it to 50%, a container will not be able to use more than 50% of CPU time even if the CPU would be idle otherwise. By default, this limit is not set, i.e. a single container can get as much as 100% of the available CPU time.
* '''CPU mask''': tells the kernel the exact CPUs this container can run on. This can also be used as a CPU limiting factor, and helps performance on non-uniform memory access (NUMA) systems.
* '''VCPU affinity''': tells the kernel the maximum number of CPUs a container can use. The difference from the previous option is that you can not specify the exact CPUs, only their number.

=== Disk ===

==== Disk space ====

In a default setup, all containers reside on the same hard drive partition (since a container is just a subdirectory). OpenVZ introduces a per-container disk space limit to control disk usage. So, to increase the disk space available to a container, one just needs to increase that limit -- dynamically, on the fly, without a need to resize a partition or a filesystem.

==== Disk I/O priority ====

Containers compete for I/O operations, and can affect each other if they use the same disk drive. OpenVZ introduces a per-container I/O priority, which can be used to decrease a "bad guy's" I/O rate in order not to trash the other containers.

==== Disk I/O bandwidth ====

I/O bandwidth (in bytes per second) can be limited per container (currently only available in the commercial Parallels Virtuozzo Containers).
=== Memory ===

All the containers share the same physical memory and swap space, and other similar resources like the page cache.
=== Miscellaneous resources ===
Also, there are the following per-container counters/limits:
* number of processes
* number of opened files
* number of iptables rules
* number of sockets
=== Read more ===

Resource management is covered in greater detail in the [[../Resource management/]] whitepaper.
== Checkpointing and live migration ==
== Miscellaneous topics ==
=== Containers overhead ===
OpenVZ works almost as fast as a usual Linux system. The only overhead is for networking and additional resource management (described above), and in most cases it is negligible.
=== OpenVZ host system scope ===
From the host system, all container processes are visible.
=== Networking (routed/bridged) ===
OpenVZ offers two types of network interfaces for containers: venet, a routed interface managed by the host, and veth, an Ethernet-like bridged interface with its own MAC address.
=== Limitations ===
From the point of view of a container owner, it looks and feels like a real system. Nevertheless, it is important to understand what the container limitations are.
