User Guide/OpenVZ Philosophy
|Warning: This User's Guide is still in development|
- 1 About OpenVZ Software
- 2 Distinctive Features of OpenVZ
- 3 Main Principles of OpenVZ Operation
- 4 Hardware Node Availability Considerations
About OpenVZ Software
What is OpenVZ
OpenVZ is a container-based virtualization solution for Linux. OpenVZ creates multiple isolated partitions, or containers (CTs), on a single physical server, making the most efficient use of hardware, software, data center space, and management effort. To its users and applications, each CT performs and executes exactly like a stand-alone server: it can be rebooted independently and has its own root access, users, IP addresses, memory, processes, files, applications, system libraries, and configuration files. The light overhead and efficient design of OpenVZ make it the right virtualization choice for production servers with live applications and real-life data.
The basic OpenVZ container capabilities are:
- Dynamic Real-time Partitioning - Partition a physical server into tens of CTs, each with full dedicated server functionality.
- Complete Isolation - Containers are secure and have full functional, fault, and performance isolation.
- Dynamic Resource Allocation - CPU, memory, network, disk, and I/O limits can be changed without rebooting.
- Mass Management - Manage a multitude of physical servers and Containers in a unified way.
The OpenVZ container virtualization model is streamlined for the best performance, management, and efficiency, maximizing resource utilization. OpenVZ provides a comprehensive solution that allows you to:
- Have hundreds of users with their individual full-featured containers sharing a single physical server;
- Provide each user with a guaranteed Quality of Service;
- Transparently move users and their environments between servers, without any manual reconfiguration.
If you administer a number of dedicated Linux servers within an enterprise, each of which runs a specific service, you can use OpenVZ to consolidate all these servers onto a single computer without losing a bit of valuable information and without compromising performance. A Container behaves just like an isolated stand-alone server:
- Each Container has its own processes, users, files and provides full root shell access;
- Each Container has its own IP addresses, port numbers, filtering and routing rules;
- Each Container can have its own configuration for the system and application software, as well as its own versions of system libraries. It is possible to install or customize software packages inside a Container independently from other CTs or the host system. Multiple Linux distributions can run on one and the same physical box.
In fact, hundreds of servers may be grouped together in this way. Besides the evident advantages of such consolidation (easier administration and the like), there are some you might not even have thought of, such as a dramatic cut in electricity bills.
OpenVZ proves invaluable for IT educational institutions that can now provide every student with a personal Linux server, which can be monitored and managed remotely. Software development companies may use Containers for testing purposes and the like.
Thus, OpenVZ can be efficiently applied in a wide range of areas: web hosting, enterprise server consolidation, software development and testing, user training, and so on.
Distinctive Features of OpenVZ
The concept of OpenVZ Containers is distinct from the concept of traditional virtual machines in that Containers always run the same OS kernel as the host system (for example, Linux on Linux). This single-kernel implementation allows Containers to run with near-zero overhead. Thus, OpenVZ offers an order of magnitude higher efficiency and manageability than traditional virtualization technologies.
OS Virtualization
From the point of view of applications and Container users, each Container is an independent system. This independence is provided by a virtualization layer in the kernel of the host OS. Note that only a negligible part of the CPU resources (around 1-2%) is spent on virtualization. The main features of the virtualization layer implemented in OpenVZ are the following:
- A Container looks like a normal Linux system. It has standard startup scripts, and software from vendors can run inside a Container without OpenVZ-specific modifications or adjustments;
- A user can change any configuration file and install additional software;
- Containers are fully isolated from each other (file system, processes, Inter Process Communication (IPC), sysctl variables);
- Containers share dynamic libraries, which greatly saves memory;
- Processes belonging to a Container are scheduled for execution on all available CPUs. Consequently, Containers are not bound to only one CPU and can use all available CPU power.
Network Virtualization
The OpenVZ network virtualization layer is designed to isolate Containers from each other and from the physical network:
- Each Container has its own IP address; multiple IP addresses per Container are allowed;
- Network traffic of a Container is isolated from that of the other Containers. In other words, Containers are protected from one another in a way that makes traffic snooping impossible;
- Firewalling may be used inside a Container: the user can create rules limiting access to some services using the canonical iptables tool inside the Container. In other words, it is possible to set up firewall rules from inside a Container;
- Routing table manipulations are allowed to benefit from advanced routing features. For example, setting different maximum transmission units (MTUs) for different destinations, specifying different source addresses for different destinations, and so on.
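The per-Container network settings above are typically managed with the vzctl utility; the CT ID and addresses below are illustrative:

```shell
# Assign an IP address to a hypothetical Container 101.
# --save persists the setting in the Container's configuration file.
vzctl set 101 --ipadd 192.168.0.101 --save

# A Container may have several addresses; add another one:
vzctl set 101 --ipadd 192.168.0.102 --save

# Set the DNS server the Container should use:
vzctl set 101 --nameserver 192.168.0.1 --save

# Remove an address that is no longer needed:
vzctl set 101 --ipdel 192.168.0.102 --save
```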
Templates
An OS template in OpenVZ is basically a set of packages from some Linux distribution used to populate one or more Containers. With OpenVZ, different distributions can co-exist on the same hardware box, so multiple OS templates are available. An OS template consists of the system programs, libraries, and scripts needed to boot up and run the system (Container), as well as some very basic applications and utilities. Applications such as a compiler or an SQL server are usually not included in an OS template. For detailed information on OpenVZ templates, see the Understanding Templates section.
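As a sketch, on a typical OpenVZ installation precreated OS template caches live under /vz/template/cache, and a Container is populated from one of them at creation time; the file names and CT ID below are illustrative:

```shell
# List the OS template caches available on the Hardware Node
# (file names are illustrative):
ls /vz/template/cache

# Create a Container (hypothetical CT ID 101) from one of the templates:
vzctl create 101 --ostemplate fedora-8-x86_64
```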
Resource Management
OpenVZ resource management controls the amount of resources available to Containers. The controlled resources include parameters such as CPU power, disk space, and a set of memory-related parameters. Resource management allows OpenVZ to:
- Effectively share available Hardware Node resources among Containers;
- Guarantee Quality-of-Service (QoS) in accordance with a service level agreement (SLA);
- Provide performance and resource isolation and protect from denial-of-service attacks;
- Simultaneously assign and control resources for a number of Containers.
Resource management is much more important for OpenVZ than for a standalone computer, since resource utilization in an OpenVZ-based system is considerably higher than in a typical system.
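These controls are exposed through the vzctl set interface; the CT ID and parameter values below are illustrative, not recommendations:

```shell
# CPU: a relative weight among Containers and a hard cap in per cent:
vzctl set 101 --cpuunits 1000 --cpulimit 25 --save

# Disk quota: soft and hard limits, given as softlimit:hardlimit:
vzctl set 101 --diskspace 10G:11G --save

# One of the memory-related (User Beancounter) parameters,
# given as barrier:limit in 4 KB pages:
vzctl set 101 --privvmpages 262144:287744 --save
```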
Main Principles of OpenVZ Operation
Basics of OpenVZ Technology
This section gives you a general idea of how the OpenVZ software operates on your computer. Please see the figure.
This figure presumes that you have a number of physical servers united into a network. In fact, you may have only one dedicated server to effectively use the OpenVZ software for the needs of your network. If you have more than one OpenVZ-based physical server, each one of the servers will have a similar architecture. In OpenVZ terminology, such servers are called Hardware Nodes (or just Nodes), because they represent hardware units within a network.
OpenVZ is installed on an already installed Linux system configured in a certain way. For example, such a customized configuration should include the creation of a /vz partition, which is the basic partition for hosting Containers and which must be considerably larger than the root partition. This and similar configuration issues are most easily resolved during the Linux installation on the Hardware Node. Detailed instructions on installing Linux (called Host Operating System in the figure above) on the Hardware Node are provided in the Installation Guide.
OpenVZ is installed in such a way that you will be able to boot your computer either with OpenVZ support or without it. This support is usually presented as "openvz" in your boot loader menu and shown as OpenVZ Layer in the figure above.
However, at this point you are not yet able to create Containers. A Container is functionally identical to an isolated standalone server, having its own IP addresses, processes, files, users, its own configuration files, its own applications, system libraries, and so on. Containers share the same Hardware Node and the same OS kernel. However, they are isolated from each other. A Container is a kind of 'sandbox' for processes and users.
Different Containers can run different Linux distributions (for example, SUSE 10, Fedora 8, or Gentoo, among others). In this case we say that a Container is based on a certain OS template. OS templates are software packages available for download or created by you. Before you are able to create a Container, you should install the corresponding OS template. This is displayed as OpenVZ Templates in the figure above.
After you have installed at least one OS template, you can create any number of Containers with the help of standard OpenVZ utilities, configure their network and/or other settings, and work with these Containers as with fully functional Linux servers.
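A minimal Container life cycle with the standard utilities might look as follows (the CT ID, template name, and addresses are illustrative):

```shell
# Create a Container from an installed OS template:
vzctl create 101 --ostemplate fedora-8-x86_64

# Give it a host name and an IP address, then start it:
vzctl set 101 --hostname ct101.example.com --save
vzctl set 101 --ipadd 192.168.0.101 --save
vzctl start 101

# Run a command inside the Container, or open a root shell in it:
vzctl exec 101 ps ax
vzctl enter 101

# Stop and, if no longer needed, destroy the Container:
vzctl stop 101
vzctl destroy 101
```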
The OpenVZ software allows you to flexibly configure various settings for the OpenVZ system in general as well as for each and every Container. Among these settings are disk and user quota, network parameters, default file locations and configuration sample files, and others.
OpenVZ stores the configuration information in two types of files: the global configuration file /etc/vz/vz.conf and Container configuration files /etc/vz/conf/CTID.conf. The global configuration file defines global and default parameters for Container operation, for example, logging settings, enabling and disabling disk quota for Containers, the default configuration file and OS template on the basis of which a new Container is created, and so on. A Container configuration file, on the other hand, defines the parameters for a given particular Container, such as disk quota and allocated resource limits, IP address, host name, and so on. If a parameter is configured both in the global OpenVZ configuration file and in the Container configuration file, the Container configuration file takes precedence. For a list of parameters constituting the global configuration file and the Container configuration files, see the vz.conf(5) and ctid.conf(5) manual pages.
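To give an idea of the format, a Container configuration file is a simple shell-style list of NAME="value" pairs; the excerpt below is illustrative, not a complete file:

```shell
# Excerpt from a hypothetical /etc/vz/conf/101.conf (illustrative values):
OSTEMPLATE="fedora-8-x86_64"
IP_ADDRESS="192.168.0.101"
HOSTNAME="ct101.example.com"
DISKSPACE="10485760:11534336"   # soft:hard limit, in 1 KB blocks
CPUUNITS="1000"
```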
The configuration files are read when the OpenVZ software and/or Containers are started. However, standard OpenVZ utilities such as vzctl allow you to change many configuration settings on the fly, either without modifying the corresponding configuration files or together with their modification (if you want the changes to apply the next time the OpenVZ software and/or Containers are started).
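This distinction is controlled by the --save flag of vzctl; without it, a change applies only to the running Container (the CT ID and value are illustrative):

```shell
# Runtime-only change: takes effect immediately, lost after restart:
vzctl set 101 --cpulimit 50

# Persistent change: applied now and also written to the Container's
# configuration file (/etc/vz/conf/101.conf):
vzctl set 101 --cpulimit 50 --save
```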
OpenVZ is free software. It consists of the OpenVZ kernel and user-level tools, which are licensed by means of two different open source licenses.
- The OpenVZ kernel is based on the Linux kernel and is thus licensed under GNU GPL version 2. The license text can be found at http://www.kernel.org/pub/linux/kernel/COPYING
- The user-level tools (vzctl, vzquota, and vzpkg) are currently licensed under the terms of the GNU GPL version 2, or any later version. The license text of GNU GPL v2 can be found at http://www.gnu.org/licenses/old-licenses/gpl-2.0.html
Hardware Node Availability Considerations
Hardware Node availability is more critical than the availability of a typical PC server. Since a Hardware Node runs multiple Containers providing a number of critical services, a Hardware Node outage can be very costly: it is as disastrous as the simultaneous outage of a number of servers running those services.
In order to increase Hardware Node availability, we suggest you follow the recommendations below:
- Use RAID storage for critical Container private areas. Prefer hardware RAID; software mirroring (RAID-1) is acceptable as a last resort.
- Do not run software on the Hardware Node itself. Create special Containers where you can host necessary services such as BIND, FTPD, HTTPD, and so on. On the Hardware Node itself, you need only the SSH daemon. Preferably, it should accept connections from a pre-defined set of IP addresses only.
- Do not create users on the Hardware Node itself. You can create as many users as you need in any Container. Remember, compromising the Hardware Node means compromising all Containers as well.
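For instance, restricting the Node's SSH daemon to a pre-defined set of addresses can be sketched with iptables rules like the following (the trusted management subnet is an assumption):

```shell
# Allow SSH to the Hardware Node only from a trusted management subnet
# (192.168.0.0/24 is an illustrative assumption), drop everything else:
iptables -A INPUT -p tcp --dport 22 -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```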