Chroot is still used for application isolation. For example, running ftpd in a chroot to avoid a potential security breach.
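For illustration, here is a minimal C sketch (not taken from any real ftpd; the /var/ftp path and the nobody-style UID/GID 65534 are assumptions) of how a daemon confines itself with the <code>chroot()</code> system call:

<pre>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Confine this process to /var/ftp (hypothetical path); chdir()
     * first so the current directory is inside the new root.
     * Requires root privileges. */
    if (chdir("/var/ftp") == -1 || chroot("/var/ftp") == -1) {
        perror("chroot setup");
        exit(1);
    }
    /* Drop root so the process cannot chroot() its way back out. */
    if (setgid(65534) == -1 || setuid(65534) == -1) {
        perror("drop privileges");
        exit(1);
    }
    printf("\"/\" now refers to what the host sees as /var/ftp\n");
    return 0;
}
</pre>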
=== Other namespaces ===
OpenVZ builds on the chroot idea and expands it to everything else that applications have. In other words, every API that the kernel provides to applications is "namespaced", making sure every container has its own isolated subset of a resource. Examples include:
* File system namespace -- this one is chroot() itself, making sure containers can't see each other's files.
* PID namespace, so that processes in every container have their own unique process IDs, and the first process inside a container has a PID of 1 (it is usually the /sbin/init process, which actually relies on its PID being 1). Containers can only see their own processes, and they can't see (or access in any way, say by sending a signal) processes in other containers. See the sketch after this list.
* /proc and /sys namespaces, so that every container has its own representation of /proc and /sys -- special filesystems used to export some kernel information to applications. In a nutshell, those are subsets of what the host system has.
Note that memory and CPU need not be namespaced: the existing virtual memory and multitasking mechanisms already take care of sharing them between containers.
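To make the PID namespace concrete, here is a minimal sketch using the mainline Linux <code>clone()</code> API (nothing OpenVZ-specific). Run as root, the child finds itself as PID 1 in its own namespace, while the parent still sees it under an ordinary PID:

<pre>
/* A sketch, not OpenVZ code: the child gets PID 1 in a new PID
 * namespace. Requires root (CAP_SYS_ADMIN). */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child_fn(void *arg)
{
    (void)arg;
    /* Inside the new PID namespace this prints 1, just like a
     * container's /sbin/init. */
    printf("child sees its own PID as %ld\n", (long)getpid());
    return 0;
}

int main(void)
{
    pid_t pid = clone(child_fn, child_stack + sizeof(child_stack),
                      CLONE_NEWPID | SIGCHLD, NULL);
    if (pid == -1) {
        perror("clone");
        return 1;
    }
    /* The parent's namespace sees the very same process as a normal PID. */
    printf("parent sees the child as PID %ld\n", (long)pid);
    waitpid(pid, NULL, 0);
    return 0;
}
</pre>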
== Single kernel approach ==
So, namespaces let a single kernel run multiple isolated containers. This is pretty much the same Linux kernel, just with an added notion of containers. To say it again: all the containers running on a single piece of hardware share one single Linux kernel. There is only one OS kernel running, and on top of it there are multiple isolated instances of user-space programs (a short demonstration follows the list below).
The single kernel approach is much more light-weight than traditional VM-style virtualization. The consequences are:
# The software stack that lies in between an application and the hardware is much thinner, which means higher performance of containers (compared to VMs).
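As a rough demonstration of this sharing, here is a sketch using the mainline <code>CLONE_NEWUTS</code> flag (the "ct101" hostname is an invented example): each container-like child can have its own hostname, yet <code>uname()</code> reports the very same kernel release everywhere, because there is only one kernel:

<pre>
/* A sketch, not OpenVZ code: two UTS namespaces, one kernel. Run as root. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child_fn(void *arg)
{
    (void)arg;
    struct utsname u;
    /* The hostname change is private to the new UTS namespace... */
    sethostname("ct101", strlen("ct101"));
    uname(&u);
    /* ...but the kernel release is the host's: there is only one kernel. */
    printf("child: nodename=%s, release=%s\n", u.nodename, u.release);
    return 0;
}

int main(void)
{
    struct utsname u;
    pid_t pid = clone(child_fn, child_stack + sizeof(child_stack),
                      CLONE_NEWUTS | SIGCHLD, NULL);
    if (pid == -1) {
        perror("clone");
        return 1;
    }
    waitpid(pid, NULL, 0);
    uname(&u);
    printf("host:  nodename=%s, release=%s\n", u.nodename, u.release);
    return 0;
}
</pre>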
== Containers overhead ==
OpenVZ works almost as fast as a usual Linux system. The only overhead is for networking and additional resource management (see below), and in most cases it is negligible.
== File system ==
From the file system point of view, a container is just a <code>chroot()</code> environment. In other words, a container file system root is merely a directory on the host system (usually /vz/root/$CTID/, under which one can find the usual directories like <code>/etc</code>, <code>/lib</code>, <code>/bin</code> etc.). The consequences are:
* there is no need for a separate block device, hard drive partition or filesystem-in-a-file setup
* the host system administrator can see all the containers' files
* containers backup/restore is trivial
* mass deployment is easy
== OpenVZ host system scope ==
== Resource control ==
Because a single kernel model is used, all containers share the same set of resources: CPU, memory, disk and network. Every container can use all of the available hardware resources if configured to do so. On the other hand, containers should not step on each other's toes, so all the resources are accounted for and controlled by the kernel.
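As a loose mainline-Linux analogy (this is <code>setrlimit()</code>, not the OpenVZ per-container controls themselves), the sketch below shows the kernel enforcing a memory cap on a single process; OpenVZ applies the same principle of kernel-side accounting and enforcement to whole containers:

<pre>
/* Sketch: ask the kernel to cap this process's address space at 64 MB,
 * then watch a larger allocation fail. OpenVZ enforces analogous
 * limits per container rather than per process. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl = { 64 << 20, 64 << 20 };  /* 64 MB soft and hard */
    if (setrlimit(RLIMIT_AS, &rl) == -1) {
        perror("setrlimit");
        return 1;
    }
    void *p = malloc(128 << 20);  /* exceeds the cap, so it should fail */
    printf("128 MB malloc %s\n", p ? "succeeded" : "failed as expected");
    free(p);
    return 0;
}
</pre>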
== Networking (routed/bridged) ==