= OpenVZ Linux Containers technology whitepaper =
OpenVZ is a virtualization technology for Linux, not unlike Xen, KVM, or VMware. Like those products, it lets one partition a single physical Linux machine into multiple smaller units; unlike them, those units are called containers, and the difference lies in the technology used for partitioning. Technically, OpenVZ consists of three major things:
* Namespaces
* Resource management
* Checkpointing
== Namespaces ==

A namespace is a feature to limit the scope of something. Here, namespaces are used as container building blocks. A simple case of a namespace is chroot.

=== Chroot ===
[[Image:Chroot.png|right|200px]]
The traditional UNIX <code>chroot()</code> system call changes the root of the file system of the calling process to a particular directory. That way it limits the file system scope of the process, so it can only see and access a limited sub-tree of files and directories. Chroot is still used for application isolation; for example, running ftpd in a chroot to avoid a potential security breach.
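To make the chroot idea concrete, here is a minimal sketch (not OpenVZ code) of a privileged program confining itself to a directory tree; the path <code>/srv/jail</code> is purely an illustrative assumption:

<pre>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Requires root: confine this process to /srv/jail
     * (an example path, assumed to hold a minimal file system tree). */
    if (chroot("/srv/jail") != 0) {
        perror("chroot");
        return 1;
    }
    /* Change into the new root so relative lookups cannot escape it. */
    if (chdir("/") != 0) {
        perror("chdir");
        return 1;
    }
    /* From now on, "/" refers to /srv/jail; the rest of the
     * host file system is invisible to this process. */
    printf("now running inside the chroot\n");
    return 0;
}
</pre>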
=== New namespaces ===

OpenVZ builds on the chroot idea and expands it to everything else that applications have. In other words, every API that the kernel provides to applications is "namespaced". Examples include:

* File system namespace -- this one is chroot() itself.
* PID namespace, so processes in every container have their own unique process IDs, and the first process inside a container has a PID of 1 (it is usually the /sbin/init process, which actually relies on its PID being 1). Containers can only see their own processes, and they can't see (or access in any way, say by sending a signal) processes in other containers. A sketch of how a new PID namespace is created is shown after this list.
* IPC namespace, so every container has its own IPC (Inter-Process Communication) shared memory segments, semaphores, and messages.
* Networking namespace, so every container has its own network devices, IP addresses, routing rules, firewall (iptables) rules, network caches and so on.
* /proc and /sys namespaces, so every container has its own representation of /proc and /sys -- special file systems used to export some kernel information to applications. In a nutshell, those are subsets of what the host system has.
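As an illustration of the kernel mechanism behind the PID namespace, here is a minimal sketch -- again, not OpenVZ's own code, just the raw kernel interface it builds on -- that starts a child in a new PID namespace using <code>clone()</code>. It assumes root privileges and a kernel with PID namespace support:

<pre>
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

/* Entry point of the child: it runs in its own PID namespace,
 * so getpid() reports 1, just like an init process would. */
static int child_main(void *arg)
{
    printf("inside new PID namespace: pid=%ld\n", (long)getpid());
    return 0;
}

int main(void)
{
    char *stack = malloc(STACK_SIZE);
    if (!stack) {
        perror("malloc");
        return 1;
    }

    /* CLONE_NEWPID gives the child a private PID namespace.
     * The stack grows downwards, so pass the top of the buffer. */
    pid_t pid = clone(child_main, stack + STACK_SIZE,
                      CLONE_NEWPID | SIGCHLD, NULL);
    if (pid == -1) {
        perror("clone");
        return 1;
    }

    /* From the host's point of view the child has an ordinary PID. */
    printf("from the host: child pid=%ld\n", (long)pid);
    waitpid(pid, NULL, 0);
    free(stack);
    return 0;
}
</pre>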
== Single kernel approach ==

Hypervisor-based products such as Xen, KVM and VMware provide the ability to have multiple instances of virtual hardware (called VMs -- Virtual Machines) on a single piece of real hardware. On top of that virtual hardware one can run any operating system, so it is possible to run several different OSs on one server; each VM runs a full software stack, including its own OS kernel.

OpenVZ takes a different approach: multiple isolated containers run on top of one single kernel. This is pretty much the same Linux kernel, just with an added notion of containers. All the containers running on a single piece of hardware share one single Linux kernel; there is only one OS kernel running, and on top of it there are multiple isolated instances of user-space programs (a small illustration is sketched after the list below). This single kernel approach is much more lightweight than traditional VM-style virtualization. The consequences are:

# Waiving the need to run multiple OS kernels leads to '''higher density''' of containers (compared to VMs)
# Live migration
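One simple way to see that containers share the host kernel is the <code>uname()</code> system call. The sketch below (an illustration, not part of the whitepaper) prints the same kernel name and release whether it is run on the host or inside a container, because there is only one kernel:

<pre>
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;

    if (uname(&u) != 0) {
        perror("uname");
        return 1;
    }

    /* On the host and inside every container this reports
     * the same kernel, since only one kernel is running. */
    printf("kernel: %s %s\n", u.sysname, u.release);
    return 0;
}
</pre>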
 
== Limitations ==
 
From the point of view of a container owner, it looks and feels like a real system. Nevertheless, it is important to understand the limitations of a container:
 
* A container is constrained by limits set by the host system administrator. That includes usage of CPU, memory, disk space and disk bandwidth, network bandwidth, etc.
 
* A container only runs Linux (Windows or FreeBSD is not an option), although running different Linux distributions is not an issue.
 
* A container can't boot/use its own kernel (it uses the host system's kernel).
 
* A container can't load its own kernel modules (it uses the host system's kernel modules).
 
* A container can't set the system time, unless explicitly configured to do so (say, to run <code>ntpd</code> in a CT); see the sketch after this list.
 
* A container does not have direct access to hardware such as a hard drive, a network card, or a PCI device. Such access can be granted by the host system administrator if needed.
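As an illustration of the system-time limitation, here is a small sketch (based on the assumption that a default container is not allowed to change the clock; adjust expectations if the container has been configured otherwise). It tries to step the clock to the value it just read: inside an ordinary container the call is expected to fail with <code>EPERM</code>, while on the host, as root, it succeeds:

<pre>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

int main(void)
{
    struct timeval tv;

    if (gettimeofday(&tv, NULL) != 0) {
        perror("gettimeofday");
        return 1;
    }

    /* Try to set the clock to the value we just read (a no-op change).
     * In a default container this is expected to fail with EPERM,
     * because the container may not change the system time. */
    if (settimeofday(&tv, NULL) != 0) {
        printf("settimeofday failed: %s\n", strerror(errno));
        return 1;
    }

    printf("settimeofday succeeded -- this process may change the system time\n");
    return 0;
}
</pre>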
