User Guide/Print version
Contents
- 1 Preface
- 2 OpenVZ Philosophy
- 3 Installation and Preliminary Operations
- 4 Operations on Containers
- 4.1 Creating New Container
- 4.2 Configuring Container
- 4.3 Starting, Stopping, Restarting, and Querying Status of Container
- 4.4 Listing Containers
- 4.5 Setting Name for Container
- 4.6 Storing Extended Information on Container
- 4.7 Migrating Container
- 4.8 Deleting Container
- 4.9 Disabling Container
- 4.10 Suspending Container
- 4.11 Running Commands in Container
- 5 Managing Resources
- 5.1 What are Resource Control Parameters?
- 5.2 Managing Disk Quotas
- 5.2.1 What are Disk Quotas?
- 5.2.2 Disk Quota Parameters
- 5.2.3 Turning On and Off Per-CT Disk Quotas
- 5.2.4 Setting Up Per-CT Disk Quota Parameters
- 5.2.5 Turning On and Off Second-Level Quotas for Container
- 5.2.6 Setting Up Second-Level Disk Quota Parameters
- 5.2.7 Checking Quota Status
- 5.2.8 Configuring Container Disk I/O Priority Level
- 5.3 Managing Container CPU resources
- 5.4 Managing System Parameters
- 5.5 Managing CT Resources Configuration
- 6 Advanced Tasks
- 7 Troubleshooting
- 8 Reference
Preface
About This Guide
This guide is meant to provide comprehensive information on OpenVZ — high-end server virtualization software for Linux-based computers. The issues discussed in this guide cover the necessary theoretical concepts as well as the practical aspects of working with OpenVZ. The guide will familiarize you with the way to create and administer containers (sometimes also called Virtual Environments, or VEs) on OpenVZ-based Hardware Nodes and to employ the command line interface for performing various tasks.
Who Should Read This Guide
The primary audience for this book is anyone responsible for administering one or more systems running OpenVZ. To get the most out of the guide, you should have solid Linux system administration skills; attending Linux system administration training courses might be helpful. Still, no more than a superficial knowledge of the Linux OS is required in order to comprehend the major OpenVZ notions and learn to perform the basic administrative operations.
Organization of This Guide
Chapter 2, OpenVZ Philosophy, is a must-read chapter that helps you grasp the general principles of OpenVZ operation. It provides an outline of OpenVZ architecture, of the way OpenVZ stores and uses configuration information, of the things you as administrator are supposed to perform, and the common way to perform them.
Chapter 3, Installation and Preliminary Operations, dwells on all those things that must be done before you are able to begin the administration proper of OpenVZ. Among these things are a customized installation of Linux on a dedicated computer (Hardware Node, in OpenVZ terminology), OpenVZ installation, preparation of the Hardware Node for creating Virtual Private Servers on it, etc.
Chapter 4, Operations on Containers, covers those operations that you may perform on a Container as a single entity: creating and deleting Containers, starting and stopping them, etc.
Chapter 5, Managing Resources, zeroes in on configuring and monitoring the resource control parameters for different containers. These parameters comprise disk quotas, disk I/O, CPU and system resources. Common ways of optimizing your containers configurations are suggested at the end of the chapter.
Chapter 6, Advanced Tasks, enumerates those tasks that are intended for advanced system administrators who would like to obtain deeper knowledge about OpenVZ capabilities.
Chapter 7, Troubleshooting, suggests ways to resolve common inconveniences should they occur during your work with the OpenVZ software.
Chapter 8, Reference, is a complete reference on all OpenVZ configuration files and Hardware Node command-line utilities. You should read this chapter if you do not understand a file format or are looking for an explanation of a particular configuration option, if you need help with a particular command, or if you are looking for a command to perform a certain task.
Documentation Conventions
Before you start using this guide, it is important to understand the documentation conventions used in it. For information on specialized terms used in the documentation, see the Glossary at the end of this document.
Typographical Conventions
The following kinds of formatting in the text identify special information.
| Formatting convention | Type of information | Example |
|---|---|---|
| Italics | Used to emphasize the importance of a point or to introduce a term. | Such servers are called Hardware Nodes. |
| Monospace | The names of commands, files, and directories. | Use vzctl start to start a Container. |
| Preformatted | On-screen computer output in your command-line sessions. | Saved parameters for CT 101 |
| Preformatted bold | What you type, as contrasted with on-screen computer output. | rpm -q quota |
Shell Prompts in Command Examples
Command line examples throughout this guide presume that you are using the Bourne-again shell (bash). Whenever a command can be run as a regular user, we will display it with a dollar sign prompt. When a command is meant to be run as root, we will display it with a hash mark prompt:
Ordinary user shell prompt | $ |
Root shell prompt | # |
General Conventions
Be aware of the following conventions used in this book.
- Chapters in this guide are divided into sections, which, in turn, are subdivided into subsections. For example, Documentation Conventions is a section, and General Conventions is a subsection.
- When following steps or using examples, be sure to type double-quotes ("), left single-quotes (`), and right single-quotes (') exactly as shown.
- The key referred to as RETURN is labeled ENTER on some keyboards.
The root path usually includes the /bin, /sbin, /usr/bin and /usr/sbin directories, so the steps in this book show the commands in these directories without absolute path names. Steps that use commands in other, less common, directories show the absolute paths in the examples.
Getting Help
In addition to this guide, there are a number of other resources available for OpenVZ which can help you use it more effectively. These resources include:
- User beancounters manual provides in-depth knowledge of UBC functioning and configuration.
- OpenVZ Wiki (http://wiki.openvz.org/) serves as a primary place to collect and share OpenVZ information.
- OpenVZ users mailing list is where users exchange questions and ideas.
- http://forum.openvz.org/ is OpenVZ support and discussion forum.
Feedback
If you spot a typo in this guide, or if you have thought of a way to make this guide better, we would love to hear from you!
If you have a suggestion for improving the documentation (or any other relevant comments), try to be as specific as possible when formulating it. If you have found an error, please include the chapter/section/subsection name and some of the surrounding text so we can find it easily.
Please submit a report by e-mail to userdocs@openvz.org.
OpenVZ Philosophy
About OpenVZ Software
What is OpenVZ
OpenVZ is a container-based virtualization solution for Linux. OpenVZ creates multiple isolated partitions, or containers (CTs), on a single physical server to utilize hardware, software, data center and management effort with maximum efficiency. Each CT performs and executes exactly like a stand-alone server for its users and applications: it can be rebooted independently and has its own root access, users, IP addresses, memory, processes, files, applications, system libraries, and configuration files. The low overhead and efficient design of OpenVZ make it the right virtualization choice for production servers with live applications and real-life data.
The basic OpenVZ container capabilities are:
- Dynamic Real-time Partitioning — Partition a physical server into tens of CTs, each with full dedicated server functionality.
- Complete Isolation - Containers are secure and have full functional, fault and performance isolation.
- Dynamic Resource Allocation - CPU, memory, network, disk and I/O can be changed without rebooting.
- Mass Management — Manage a multitude of physical servers and containers in a unified way.
The OpenVZ containers virtualization model is streamlined for the best performance, management, and efficiency, maximizing the resource utilization.
OpenVZ Applications
OpenVZ provides a comprehensive solution allowing you to:
- Have hundreds of users with their individual full-featured containers sharing a single physical server;
- Provide each user with a guaranteed Quality of Service;
- Transparently move users and their environments between servers, without any manual reconfiguration.
If you administer a number of dedicated Linux servers within an enterprise, each of which runs a specific service, you can use OpenVZ to consolidate all these servers onto a single computer without losing a bit of valuable information and without compromising performance. A Container behaves just like an isolated stand-alone server:
- Each Container has its own processes, users, files and provides full root shell access;
- Each Container has its own IP addresses, port numbers, filtering and routing rules;
- Each Container can have its own configuration for the system and application software, as well as its own versions of system libraries. It is possible to install or customize software packages inside a Container independently from other CTs or the host system. Multiple versions of a package can run on one and the same Linux box.
In fact, hundreds of servers may be grouped together in this way. Besides the evident advantages of such consolidation (easier administration and the like), there are some you might not even have thought of, such as dramatically lower electricity bills!
OpenVZ proves invaluable for IT educational institutions that can now provide every student with a personal Linux server, which can be monitored and managed remotely. Software development companies may use Containers for testing purposes and the like.
Thus, OpenVZ can be efficiently applied in a wide range of areas: web hosting, enterprise server consolidation, software development and testing, user training, and so on.
Distinctive Features of OpenVZ
The concept of OpenVZ Containers is distinct from the concept of traditional virtual machines in that Containers always run the same OS kernel as the host system (such as Linux on Linux, Windows on Windows, etc.). This single-kernel implementation makes it possible to run Containers with near-zero overhead. Thus, OpenVZ offers an order of magnitude higher efficiency and manageability than traditional virtualization technologies.
Containers Virtualization
From the point of view of applications and Container users, each Container is an independent system. This independence is provided by a virtualization layer in the kernel of the host OS. Note that only a negligible part of the CPU resources is spent on virtualization (around 1-2%). The main features of the virtualization layer implemented in OpenVZ are the following:
- Container looks like a normal Linux system. It has standard startup scripts, software from vendors can run inside Container without OpenVZ-specific modifications or adjustment;
- A user can change any configuration file and install additional software;
- Containers are fully isolated from each other (file system, processes, Inter Process Communication (IPC), sysctl variables);
- Containers share dynamic libraries, which greatly saves memory;
- Processes belonging to a Container are scheduled for execution on all available CPUs. Consequently, Containers are not bound to only one CPU and can use all available CPU power.
Network Virtualization
The OpenVZ network virtualization layer is designed to isolate Containers from each other and from the physical network:
- Each Container has its own IP address; multiple IP addresses per Container are allowed;
- Network traffic of a Container is isolated from the other Containers. In other words, Containers are protected from each other in a way that makes traffic snooping impossible;
- Firewalling may be used inside a Container (the user can create rules limiting access to some services using the canonical iptables tool inside the Container). In other words, it is possible to set up firewall rules from inside a Container;
- Routing table manipulations are allowed to benefit from advanced routing features. For example, setting different maximum transmission units (MTUs) for different destinations, specifying different source addresses for different destinations, and so on.
Templates
An OS template in OpenVZ is basically a set of packages from some Linux distribution used to populate one or more Containers. With OpenVZ, different distributions can co-exist on the same hardware box, so multiple OS templates are available. An OS template consists of system programs, libraries, and scripts needed to boot up and run the system (Container), as well as some very basic applications and utilities. Applications like a compiler and an SQL server are usually not included into an OS template. For detailed information on OpenVZ templates, see the Understanding Templates section.
Resource Management
OpenVZ Resource Management controls the amount of resources available to Containers. The controlled resources include such parameters as CPU power, disk space, and a set of memory-related parameters. Resource management allows OpenVZ to:
- Effectively share available Hardware Node resources among Containers;
- Guarantee Quality-of-Service (QoS) in accordance with a service level agreement (SLA);
- Provide performance and resource isolation and protect from denial-of-service attacks;
- Simultaneously assign and control resources for a number of Containers, etc.
Resource Management is much more important for OpenVZ than for a standalone computer since computer resource utilization in an OpenVZ-based system is considerably higher than that in a typical system.
Main Principles of OpenVZ Operation
Basics of OpenVZ Technology
In this section we will try to give you a reasonably precise idea of the way the OpenVZ software operates on your computer. Please see the figure.
This figure presumes that you have a number of physical servers united into a network. In fact, you may have only one dedicated server to effectively use the OpenVZ software for the needs of your network. If you have more than one OpenVZ-based physical server, each one of the servers will have a similar architecture. In OpenVZ terminology, such servers are called Hardware Nodes (or just Nodes), because they represent hardware units within a network.
OpenVZ is installed on an already installed Linux system configured in a certain way. For example, such a customized configuration shall include the creation of a /vz partition, which is the basic partition for hosting Containers and which must be much larger than the root partition. This and similar configuration issues are most easily resolved during the Linux installation on the Hardware Node. Detailed instructions on installing Linux (called Host Operating System in the picture above) on the Hardware Node are provided in the Installation Guide.
OpenVZ is installed in such a way that you will be able to boot your computer either with OpenVZ support or without it. This support is usually presented as "openvz" in your boot loader menu and shown as OpenVZ Layer in the figure above.
However, at this point you are not yet able to create Containers. A Container is functionally identical to an isolated standalone server, having its own IP addresses, processes, files, users, its own configuration files, its own applications, system libraries, and so on. Containers share the same Hardware Node and the same OS kernel. However, they are isolated from each other. A Container is a kind of 'sandbox' for processes and users.
Different Containers can run different versions of Linux (for example, SUSE 10, Fedora 8, Gentoo, and many others). In this case we say that a Container is based on a certain OS template. OS templates are software packages available for download or created by you. Before you are able to create a Container, you should install the corresponding OS template. This is displayed as OpenVZ Templates in the scheme above.
After you have installed at least one OS template, you can create any number of Containers with the help of standard OpenVZ utilities, configure their network and/or other settings, and work with these Containers as with fully functional Linux servers.
OpenVZ Configuration
The OpenVZ software allows you to flexibly configure various settings for the OpenVZ system in general as well as for each and every Container. Among these settings are disk and user quota, network parameters, default file locations and configuration sample files, and others.
OpenVZ stores the configuration information in two types of files: the global configuration file /etc/vz/vz.conf and Container configuration files /etc/vz/conf/CTID.conf. The global configuration file defines global and default parameters for Container operation, for example, logging settings, enabling and disabling disk quota for Containers, the default configuration file and OS template on the basis of which a new Container is created, and so on. On the other hand, a Container configuration file defines the parameters for a given particular Container, such as disk quota and allocated resources limits, IP address and host name, and so on. In case a parameter is configured both in the global OpenVZ configuration file and in the Container configuration file, the Container configuration file takes precedence. For a list of parameters constituting the global configuration file and the Container configuration files, see the vz.conf(5) and ctid.conf(5) manual pages.
The configuration files are read when the OpenVZ software and/or Containers are started. However, OpenVZ standard utilities, for example vzctl, allow you to change many configuration settings on the fly, either without modifying the corresponding configuration files or with their modification (if you want the changes to apply the next time the OpenVZ software and/or Containers are started).
Licensing
OpenVZ is free software. It consists of the OpenVZ kernel and user-level tools, which are licensed by means of two different open source licenses.
- The OpenVZ kernel is based on the Linux kernel and is thus licensed under GNU GPL version 2. The license text can be found at http://www.kernel.org/pub/linux/kernel/COPYING
- The user-level tools (vzctl, vzquota, and vzpkg) are currently licensed under the terms of the GNU GPL license version 2, or any later version. The license text of GNU GPL v2 can be found at http://www.gnu.org/licenses/old-licenses/gpl-2.0.html
Hardware Node Availability Considerations
Hardware Node availability is more critical than the availability of a typical PC server. Since it runs multiple Containers providing a number of critical services, Hardware Node outage might be very costly. Hardware Node outage can be as disastrous as the simultaneous outage of a number of servers running critical services.
In order to increase Hardware Node availability, we suggest you follow the recommendations below:
- Use RAID storage for critical Container private areas. Prefer hardware RAID; software mirroring RAID might also do as a last resort.
- Do not run software on the Hardware Node itself. Create special Containers where you can host necessary services such as BIND, FTPD, HTTPD, and so on. On the Hardware Node itself, you need only the SSH daemon. Preferably, it should accept connections from a pre-defined set of IP addresses only (see the sketch after this list).
- Do not create users on the Hardware Node itself. You can create as many users as you need in any Container. Remember, compromising the Hardware Node means compromising all Containers as well.
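One possible way to restrict SSH access to the Hardware Node is a pair of iptables rules like the minimal sketch below; the 203.0.113.10 address stands for a hypothetical administrator workstation and should be replaced with your own trusted addresses:
# iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
# iptables -A INPUT -p tcp --dport 22 -j DROP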
Installation and Preliminary Operations
This chapter provides exhaustive information on the process of installing and deploying your OpenVZ system, including the prerequisites and the stages you will go through.
Installation Requirements
After deciding on the structure of your OpenVZ system, you should make sure that all the Hardware Nodes where you are going to deploy OpenVZ for Linux meet the following system (hardware and software) and network requirements.
System Requirements
This section focuses on the hardware and software requirements for the OpenVZ for Linux software product.
Hardware Compatibility
The Hardware Node requirements for the standard 32-bit edition of OpenVZ are the following:
- IBM PC-compatible computer;
- Intel Celeron, Pentium II, Pentium III, Pentium 4, Xeon, or AMD Athlon CPU;
- At least 128 MB of RAM;
- Hard drive(s) with at least 4 GB of free disk space;
- Network card (either Intel EtherExpress100 (i82557-, i82558- or i82559-based) or 3Com (3c905 or 3c905B or 3c595) or RTL8139-based are recommended).
The computer should satisfy the Red Hat Enterprise Linux or Fedora Core hardware requirements (please, see the hardware compatibility lists at www.redhat.com).
The exact computer configuration depends on how many Virtual Private Servers you are going to run on the computer and what load these VPSs are going to produce. Thus, in order to choose the right configuration, please follow the recommendations below:
- CPUs. The more Virtual Private Servers you plan to run simultaneously, the more CPUs you need.
- Memory. The more memory you have, the more Virtual Private Servers you can run. The exact figure depends on the number and nature of applications you are planning to run in your Virtual Private Servers. However, on the average, at least 1 GB of RAM is recommended for every 20-30 Virtual Private Servers;
- Disk space. Each Virtual Private Server occupies 400–600 MB of hard disk space for system files in addition to the user data inside the Virtual Private Server (for example, web site content). You should consider it when planning disk partitioning and the number of Virtual Private Servers to run.
A typical 2-way Dell PowerEdge 1650 1U-mountable server with 1 GB of RAM and 36 GB of hard disk space is suitable for hosting 30 Virtual Private Servers.
Software Compatibility
The Hardware Node should run either Red Hat Enterprise Linux 6 or 5, or CentOS 6 or 5, or Scientific Linux 6 or 5. The detailed instructions on installing these operating systems for the best performance of OpenVZ are provided in the next sections.
This requirement does not restrict the ability of OpenVZ to provide other Linux versions as an operating system for Virtual Private Servers. The Linux distribution installed in a Virtual Private Server may differ from that of the host OS.
Network Requirements
The network prerequisites listed in this subsection will help you avoid delays and problems in getting OpenVZ for Linux up and running. You should take care of the following in advance:
- Local Area Network (LAN) for the Hardware Node;
- Internet connection for the Hardware Node;
- Valid IP address for the Hardware Node as well as other IP parameters (default gateway, network mask, DNS configuration);
- At least one valid IP address for each Virtual Private Server. The total number of addresses should be no less than the planned number of Virtual Private Servers. The addresses may be allocated in different IP networks;
- If a firewall is deployed, check that IP addresses allocated for Virtual Private Servers are open for access from the outside.
Installing and Configuring Host Operating System on Hardware Node
This section explains how to install Fedora Core 4 on the Hardware Node and how to configure it for OpenVZ. If you are using another distribution, please consult the corresponding installation guides about the installation specifics.
Choosing System Type
Please follow the instructions from your Installation Guide when installing the OS on your Hardware Node. After the first several screens, you will be presented with a screen specifying the installation type. OpenVZ requires Server System to be installed, therefore select “Server” at the dialog shown in the figure below.
Figure 2: Fedora Core Installation - Choosing System Type
It is not recommended to install extra packages on the Hardware Node itself because of the critical importance of Hardware Node availability (see the #Hardware Node Availability Considerations subsection in this chapter). You will be able to run any necessary services inside dedicated Virtual Private Servers.
Disk Partitioning
On the Disk Partitioning Setup screen, select Manual partition with Disk Druid. Do not choose automatic partitioning since this type of partitioning will create a disk layout intended for systems running multiple services. In case of OpenVZ, all your services shall run inside Virtual Private Servers.
Figure 3: Fedora Core Installation - Choosing Manual Partitioning
Create the following partitions on the Hardware Node:
| Partition | Description | Typical size |
|---|---|---|
| / | Root partition hosting the operating system files | 4–12 GB |
| swap | Paging partition for the Linux operating system | 2 times RAM, or RAM + 2 GB, depending on available HD space |
| /vz | Partition to host OpenVZ templates and Virtual Private Servers | all the remaining space on the hard disk |
Many of the historical recommendations for partitioning are outdated now that all hard drives are well above 20 GB, so all minimums can be increased without any impact if you have plenty of drive space. It is suggested to use the ext3 or ext4 file system for the /vz partition. This partition holds all data of the Virtual Private Servers existing on the Hardware Node, so allocate as much disk space as possible to it. It is not recommended to use the reiserfs file system, as it has proved to be less stable than ext3, and stability is of paramount importance for OpenVZ-based computers.
The root partition will host the operating system files. A fresh CentOS 6 install with basic server packages plus the OpenVZ kernel can occupy up to approximately 2 GB of disk space, so 4 GB is a good minimal size for the root partition. If you have plenty of drive space and think you may add additional software to the Node, such as monitoring software, consider using more. Historically, the recommended size of the swap partition has been two times the size of physical RAM. Now, with minimum server RAM often above 2 GB, a more reasonable specification might be RAM + 2 GB if RAM is above 2 GB and disk space is limited.
Finishing OS Installation
After the proper partitioning of your hard drive(s), proceed in accordance with your OS Installation Guide. While on the Network Configuration screen, you should ensure the correctness of the Hardware Node’s IP address, host name, DNS, and default gateway information. If you are using DHCP, make sure that it is properly configured. If necessary, consult your network administrator. On the Firewall Configuration screen, choose No firewall. Option Enable SELinux should be set to Disabled.
Fedora Core Installation - Disabling Firewall and SELinux
After finishing the installation and rebooting your computer, you are ready to install OpenVZ on your system.
Installing OpenVZ Software
Downloading and Installing OpenVZ Kernel
First of all, you should download the kernel binary RPM from http://openvz.org/download/kernel/. You need only one kernel RPM, so please choose the appropriate kernel binary depending on your hardware:
If you use Red Hat Enterprise Linux 5, CentOS 5, or Scientific Linux 5:
- If there is more than one CPU available on your Hardware Node (or a CPU with hyperthreading), select the vzkernel-smp RPM.
- If there is more than 4 Gb of RAM available, select the vzkernel-enterprise RPM.
- Otherwise, select the uniprocessor kernel RPM (vzkernel-version).
If you use Red Hat Enterprise Linux 6, CentOS 6, or Scientific Linux 6:
- Select the kernel RPM (vzkernel-version).
Next, you shall install the kernel RPM of your choice on your Hardware Node by issuing the following command:
# rpm -ihv vzkernel-name*.rpm
Note: You should not use the rpm -U command (where -U stands for "upgrade"); otherwise, all the kernels currently installed on the Node will be removed.
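If you want to double-check which OpenVZ kernels end up installed on the Node, you can query the RPM database; the package name pattern below assumes the vzkernel naming used in this chapter:
# rpm -qa | grep vzkernel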
Configuring Boot Loader
In case you use the GRUB loader, it will be configured automatically. You should only make sure that the lines below are present in the /boot/grub/grub.conf file on the Node:
title Fedora Core (2.6.8-022stab029.1)
        root (hd0,0)
        kernel /vmlinuz-2.6.8-022stab029.1 ro root=/dev/sda5 quiet rhgb
        initrd /initrd-2.6.8-022stab029.1.img
However, we recommend that you configure this file in the following way:
- Change Fedora Core to OpenVZ (just for clarity, so the OpenVZ kernels will not be mixed up with non OpenVZ ones).
- Remove all extra arguments from the kernel line, leaving only the root=... parameter.
At the end, the modified grub.conf file should look as follows:
title OpenVZ (2.6.8-022stab029.1)
        root (hd0,0)
        kernel /vmlinuz-2.6.8-022stab029.1 ro root=/dev/sda5
        initrd /initrd-2.6.8-022stab029.1.img
Setting sysctl parameters
There are a number of kernel limits that should be set for OpenVZ to work correctly. OpenVZ is shipped with a tuned /etc/sysctl.conf file. Below are the contents of the relevant part of /etc/sysctl.conf:
# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.ip_forward = 1
net.ipv4.conf.default.proxy_arp = 0
# Enables source route verification
net.ipv4.conf.all.rp_filter = 1
# Enables the magic-sysrq key
kernel.sysrq = 1
# TCP Explicit Congestion Notification
net.ipv4.tcp_ecn = 0
# we do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
Please edit the file as described. To apply the changes issue the following command:
# sysctl -p
Alternatively, the changes will be applied upon the following reboot.
It is also worth mentioning that normally you should have forwarding (net.ipv4.ip_forward) turned on since the Hardware Node forwards the packets destined to or originating from the Virtual Private Servers.
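After applying the settings, you can verify that forwarding is indeed enabled:
# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1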
After that, you should reboot your computer and choose "OpenVZ" on the boot loader menu.
Downloading and Installing OpenVZ Packages
After you have successfully installed and booted the OpenVZ kernel, you can proceed with installing the user-level tools for OpenVZ. You should install the following OpenVZ packages:
- vzctl: this package is used to perform different tasks on the OpenVZ Virtual Private Servers (create, destroy, start, stop, set parameters etc.).
- vzquota: this package is used to manage the VPS quotas.
You can download the corresponding binary RPMs from http://openvz.org/download/utils/.
Next, install these utilities by using the following command:
# rpm -Uhv vzctl*.rpm vzquota*.rpm
Now you can launch OpenVZ. To this effect, execute the following command:
# /etc/init.d/vz start
This will load all the needed OpenVZ kernel modules. During the next reboot, this script will be executed automatically.
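To make sure the OpenVZ-related kernel modules have actually been loaded, you can list them; the exact set of module names varies between kernel versions, so treat this only as a quick sanity check:
# lsmod | grep vz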
Installing OS Templates
A template is a set of package files to be installed into a Container. Operating system templates are used to create new Containers with a pre-installed operating system. Therefore, you need to download at least one OS template and put it into the /vz/template/cache/ directory on the Hardware Node.
Links to all available OS templates are listed at Download/template/precreated.
For example, this is how to download the CentOS 5 OS template:
# cd /vz/template/cache
# wget http://download.openvz.org/template/precreated/centos-5-x86.tar.gz
Operations on Containers
This chapter describes how to perform day-to-day operations on individual Containers treated as single entities.
Note: We assume that you have successfully installed, configured, and deployed your OpenVZ system. In case you have not, please turn to the Installation Guide providing detailed information on all these operations.
Creating New Container
This section guides you through the process of creating a Container. We assume that you have successfully installed OpenVZ and prepared at least one OS template. If there are no OS templates prepared for the Container creation, turn to the Templates Management Guide first.
Before You Begin
Before you start creating a Container, you should:
- Check that the Hardware Node is visible on your network. You should be able to connect to/from other hosts. Otherwise, your Containers will not be accessible from other servers.
- Check that you have at least one IP address per Container, and that the addresses either belong to the same network as the Hardware Node or routing to the Containers has been set up via the Hardware Node.
To create a new Container, you have to:
- choose the new Container ID;
- choose the OS template to use for the Container;
- create the Container itself.
Choosing Container ID
Every Container has a numeric ID, also known as Container ID, associated with it. The ID is a 32-bit integer number beginning with zero and unique for a given Hardware Node. When choosing an ID for your Container, please follow the simple guidelines below:
- ID 0 is used for the Hardware Node itself. You cannot and should not try to create a Container with ID 0.
- The OpenVZ software reserves the IDs ranging from 0 to 100. Though OpenVZ uses only ID 0, future versions might use additional Container IDs for internal needs. To facilitate upgrading, please do not create Containers with IDs below 101.
The only strict requirement for a Container ID is to be unique for a particular Hardware Node. However, if you are going to have several computers running OpenVZ, we recommend assigning different Container ID ranges to them. For example, on Hardware Node 1 you create Containers within the range of IDs from 101 to 1000; on Hardware Node 2 you use the range from 1001 to 2000, and so on. This approach makes it easier to remember on which Hardware Node a Container has been created, and eliminates the possibility of Container ID conflicts when a Container migrates from one Hardware Node to another.
Another approach to assigning Container IDs is to follow some pattern of Container IP addresses. Thus, for example, if you have a subnet with the 10.0.x.x address range, you may want to assign the 17015 ID to the Container with the 10.0.17.15 IP address, the 39108 ID to the Container with the 10.0.39.108 IP address, and so on. This makes it much easier to run a number of OpenVZ utilities eliminating the necessity to check up the Container IP address by its ID and similar tasks. You can also think of your own patterns for assigning Container IDs depending on the configuration of your network and your specific needs.
Before you decide on a new Container ID, you may want to make sure that no Container with this ID has yet been created on the Hardware Node. The easiest way to check whether the Container with the given ID exists is to issue the following command:
# vzlist -a 101
Container not found
This output shows that Container 101 does not exist on the particular Hardware Node; otherwise it would be present in the list.
Choosing OS Template
Before starting to create a Container, you shall decide on which OS template your Container will be based. There might be several OS templates installed on the Hardware Node and prepared for the Container creation; use the following command to find out what OS templates are available on your system:
# ls /vz/template/cache/
centos-4-x86.tar.gz       fedora-7-x86.tar.gz     suse-10.3-x86.tar.gz
centos-4-x86_64.tar.gz    fedora-7-x86_64.tar.gz  suse-10.3-x86_64.tar.gz
centos-5-x86.tar.gz       fedora-8-x86.tar.gz     ubuntu-7.10-x86.tar.gz
centos-5-x86_64.tar.gz    fedora-8-x86_64.tar.gz  ubuntu-7.10-x86_64.tar.gz
debian-3.1-x86.tar.gz     fedora-9-x86.tar.gz     ubuntu-8.04-x86.tar.gz
debian-4.0-x86.tar.gz     fedora-9-x86_64.tar.gz  ubuntu-8.04-x86_64.tar.gz
debian-4.0-x86_64.tar.gz
Note: You have to remove the .tar.gz suffix from the name to make it a valid OS template name. For example, centos-4-x86 is a valid OS template name.
Creating Container
After the Container ID and the installed OS template have been chosen, you can create the Container private area with the vzctl create command. The private area is the directory containing the actual files of the given Container; it usually resides in /vz/private/CTID/. The private area is mounted to the /vz/root/CTID directory on the Hardware Node and provides Container users with a complete Linux file system tree.
The vzctl create command requires only the Container ID and the name of the OS template as arguments; however, in order to avoid setting all the Container resource control parameters after creating the private area, you can specify a sample configuration to be used for your new Container. The sample configuration files reside in the /etc/vz/conf directory and have names of the form ve-configname.conf-sample. The most commonly used sample is the ve-basic.conf-sample file; it has resource control parameters suitable for most Containers.
Thus, for example, you can create a new Container by typing the following string:
# vzctl create 101 --ostemplate centos-5-x86 --config basic
Creating container private area (centos-5-x86)
Performing postcreate actions
Container private area was created
In this case, the OpenVZ software will create a Container with ID 101, the private area based on the centos-5-x86 OS template, and configuration parameters taken from the ve-basic.conf-sample sample configuration file.
If you specify neither an OS template nor a sample configuration, vzctl will try to take the corresponding values from the global OpenVZ configuration file (/etc/vz/vz.conf). So you can set the default values in this file using your favorite text editor, for example:
DEF_OSTEMPLATE="centos-5-x86"
CONFIGFILE="basic"
and do without specifying these parameters each time you create a new Container.
Now you can create a Container with ID 101 with the following command:
# vzctl create 101
Creating container private area (centos-5-x86)
Performing postcreate actions
Container private area was created
In principle, now you are ready to start your newly created Container. However, typically you need to set its network IP address, hostname, DNS server address and root password before starting the Container for the first time.
Configuring Container
Configuring a Container consists of several tasks:
- Setting Container startup parameters;
- Setting Container network parameters;
- Setting Container user passwords;
- Configuring Quality of Service (Service Level) parameters.
For all these tasks, the vzctl set command is used. Using this command for setting Container startup parameters, network parameters, and user passwords is explained later in this subsection. Service Level Management configuration topics are covered in the Managing Resources chapter.
Setting Startup Parameters
The vzctl set command allows you to define the onboot Container startup parameter. Setting this parameter to yes makes your Container start automatically at the Hardware Node startup. For example, to enable Container 101 to start automatically when your Hardware Node boots, execute the following command:
# vzctl set 101 --onboot yes --save
Saved parameters for CT 101
The onboot parameter will take effect only on the next Hardware Node startup.
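You can check how the parameter is currently set by looking at the Container configuration file; ONBOOT is the variable under which vzctl set --onboot --save stores the value:
# grep ONBOOT /etc/vz/conf/101.conf
ONBOOT="yes"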
Setting Network Parameters
In order to be accessible from the network, a Container shall be assigned a correct IP address and hostname; DNS servers shall also be configured. In addition, the SSH daemon shall be running inside the Container. The session below illustrates setting the Container 101 network parameters:
# vzctl set 101 --hostname server101.mydomain.com --save
Set hostname: server101.mydomain.com
Saved parameters for CT 101
# vzctl set 101 --ipadd 10.0.186.1 --save
Adding IP address(es): 10.0.186.1
Saved parameters for CT 101
# vzctl set 101 --nameserver 192.168.1.165 --save
File resolv.conf was modified
Saved parameters for CT 101
These commands will assign Container 101 the IP address 10.0.186.1, the hostname server101.mydomain.com, and the DNS server address 192.168.1.165. The --save flag instructs vzctl to also save all the parameters set to the Container configuration file.
You can issue the above commands when the Container is running. In this case, if you do not want the applied values to persist, you can omit the --save option, and the applied values will be valid only until the Container shutdown.
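For example, the following command (note the absence of --save) adds a second, temporary IP address that disappears after the Container is stopped; the address itself is purely illustrative:
# vzctl set 101 --ipadd 10.0.186.2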
To check whether SSH is running inside the Container, use vzctl exec, which allows executing any command in the Container context.
# vzctl start 101
[This command starts Container 101, if it is not started yet]
# vzctl exec 101 service sshd status
sshd is stopped
# vzctl exec 101 service sshd start
Starting sshd: [ OK ]
# vzctl exec 101 service sshd status
sshd (pid 3801) is running...
The above example assumes that Container 101 is created on the CentOS 5 template. For other OS templates, please consult the corresponding OS documentation.
For more information on running commands inside a Container from the Hardware Node, see the #Running Commands in Container subsection.
Setting root Password for Container
Setting the root user password is necessary for connecting to a Container via SSH. By default, the root account is locked in a newly created Container, and you cannot log in. In order to log in to the Container, it is necessary to create a user account inside the Container and set a password for this account, or unlock the root account. The easiest way of doing it is to run:
# vzctl set 101 --userpasswd root:test
In this example, we set the root password for Container 101 to "test", and you can log in to the Container via SSH as root and administer it in the same way as you administer a standalone Linux server: install additional software, add users, set up services, and so on. The password will be set inside the Container in the /etc/shadow file in an encrypted form and will not be stored in the Container configuration file. Therefore, if you forget the password, you have to reset it. Note that --userpasswd ignores the --save switch; the password is always set persistently for the given Container.
While you can create users and set passwords for them using the vzctl exec or vzctl set commands, it is suggested that you delegate user management to the Container administrator, advising him/her of the Container root account password.
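For example, a separate administrative account could be prepared roughly as follows; the user name and password here are purely illustrative:
# vzctl exec 101 useradd -m admin
# vzctl set 101 --userpasswd admin:secret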
Starting, Stopping, Restarting, and Querying Status of Container
When a Container is created, it may be started up and shut down like an ordinary server. To start Container 101, use the following command:
# vzctl start 101
Starting container ...
Container is mounted
Adding IP address(es): 10.0.186.1
Setting CPU units: 1000
Configure meminfo: 65536
Set hostname: server101.mydomain.com
File resolv.conf was modified
Container start in progress...
To check the status of a Container, use the vzctl status command:
# vzctl status 101
CTID 101 exist mounted running
Its output shows the following information:
- Whether the Container private area exists;
- Whether this private area is mounted;
- Whether the Container is running.
In our case, vzctl reports that Container 101 exists, its private area is mounted, and the Container is running. Alternatively, you can make use of the vzlist utility:
# vzlist 101
      CTID      NPROC STATUS  IP_ADDR         HOSTNAME
       101         10 running 10.0.186.1      server101.mydomain.com
Still another way of getting the Container status is checking the /proc/vz/veinfo file. This file lists all the Containers currently running on the Hardware Node. Each line presents a running Container in the <CT_ID> <reserved> <number_of_processes> <IP_address> ... format:
# cat /proc/vz/veinfo
      101     0    10     10.0.186.1
        0     0    79
This output shows that Container 101 is running, there are 10 running processes inside the Container, and its IP address is 10.0.186.1. The second line corresponds to the Container with ID 0, which is the Hardware Node itself.
The following command is used to stop a Container:
# vzctl stop 101
Stopping container ...
Container was stopped
Container is unmounted
# vzctl status 101
CTID 101 exist unmounted down
vzctl has a two-minute timeout for the Container shutdown scripts to be executed. If the Container is not stopped in two minutes, the system forcibly kills all the processes in the Container. The Container will be stopped in any case, even if it is seriously damaged. To avoid waiting for two minutes in the case of a Container that is known to be corrupt, you may use the --fast switch:
# vzctl stop 101 --fast
Stopping container ...
Container was stopped
Container is unmounted
Make sure that you do not use the --fast switch with healthy Containers unless necessary, as the forcible killing of Container processes may be potentially dangerous.
The vzctl start and vzctl stop commands initiate the normal Linux OS startup or shutdown sequences inside the Container. In the case of a Red Hat-like distribution, the System V initialization scripts will be executed just like on an ordinary server. You can customize startup scripts inside the Container as needed.
To restart a Container, you can also use the vzctl restart command:
# vzctl restart 101
Restarting container
Stopping container ...
Container was stopped
Container is unmounted
Starting container ...
Container is mounted
Adding IP address(es): 10.0.186.1
Setting CPU units: 1000
Configure meminfo: 65536
Set hostname: server101.mydomain.com
File resolv.conf was modified
Container start in progress...
Note: You can also use Container names to start, stop, and restart the corresponding Containers. For detailed information on Container names, please turn to the #Setting Name for Container section.
Listing Containers
Very often you may want to get an overview of the Containers existing on the given Hardware Node and to get additional information about them — their IP addresses, hostnames, current resource consumption, etc. In the most general case, you may get a list of all Containers by issuing the following command:
# vzlist -a
      CTID      NPROC STATUS  IP_ADDR         HOSTNAME
       101         10 running 10.101.66.101   server101.mydomain.com
       102          - stopped 10.101.66.102   server102.mydomain.com
       103          5 running 10.101.66.103   server103.mydomain.com
The -a switch tells the vzlist utility to output both running and stopped Containers. By default, only running Containers are shown. The default columns inform you of the Container IDs, the number of running processes inside Containers, their status, IP addresses, and hostnames. This output may be customized as desired by using vzlist command line switches. For example:
# vzlist -o ctid,diskinodes.s -s diskinodes.s
      CTID DQINODES.S
       101     400000
       103     200000
This shows only the running Containers, with information about their IDs and the soft limit on disk inodes (see the Managing Resources chapter for more information), with the list sorted by this soft limit. The full list of the vzlist command line switches and output and sorting options is available in the vzlist(8) man page.
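As another illustration, the following command would list the IDs, hostnames, and IP addresses of all Containers, both running and stopped; the field names are assumed to be the ones accepted by your version of vzlist, so check the man page if the command complains:
# vzlist -a -o ctid,hostname,ip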
Setting Name for Container
You can assign an arbitrary name to your Container and use it, along with the Container ID, to refer to the Container while performing this or that Container-related operation on the Hardware Node. For example, you can start or stop a Container by specifying the Container name instead of its ID.
You can assign names to your Containers using the --name option of the vzctl set command. For example, to set the computer1 name for Container 101, execute the following command:
# vzctl set 101 --name computer1 --save
Name computer1 assigned
Saved parameters for Container 101
You can also set a name for Container 101 by editing its configuration file. In this case you should proceed as follows:
1. Open the configuration file of Container 101 (/etc/vz/conf/101.conf) for editing and add the following string to the file:
NAME="computer1"
2. In the /etc/vz/names directory on the Hardware Node, create a symbolic link with the name of computer1 pointing to the Container configuration file. For example:
# ln --symbolic /etc/vz/conf/101.conf /etc/vz/names/computer1
When specifying names for Containers, please keep in mind the following:
- Names may contain the following symbols: a-z, A-Z, 0-9, underscores (_), dashes (-), spaces, the symbols from the ASCII character table with their codes in the 128–255 range, and all the national alphabets included in the Unicode code space.
- Container names cannot consist of digits only; otherwise, there would be no way to distinguish them from Container IDs.
- If a name contains one or more spaces, the Container name should be put in single or double quotes, or the spaces have to be escaped by preceding them with a backslash (\); see the example after this list.
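For example, a name containing spaces could be assigned by quoting it as described above; the name itself is arbitrary:
# vzctl set 101 --name "Web server 1" --save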
After the name has been successfully assigned to Container 101, you can start using it instead of ID 101 to perform Container-related operations on the Node. For example:
- You can stop Container 101 with the following command:
# vzctl stop computer1
Stopping container ...
Container was stopped
Container is unmounted
- You can start Container 101 anew by issuing the following command:
# vzctl start computer1
Starting container ...
...
You can find out what name is assigned to Container 101 in one of the following ways:
- Using the vzlist utility:
# vzlist -o name 101
NAME
computer1
- Checking the NAME parameter in the Container configuration file (/etc/vz/conf/101.conf). For example:
# grep NAME= /etc/vz/conf/101.conf
NAME="computer1"
- Checking which symlink in the /etc/vz/names/ directory points to the Container configuration file. The file name of the symlink is the name of the Container. For example:
# ls -l /etc/vz/names/ | grep /101.conf
lrwxrwxrwx 1 root root 21 Jan 16 20:18 computer1 -> /etc/vz/conf/101.conf
Storing Extended Information on Container
Note: This feature is available since vzctl-3.0.23.
Sometimes, it may be difficult to remember the information on certain Containers. The probability of this increases together with the number of Containers and with the time elapsed since the Container creation. The OpenVZ software allows you to set the description of any Container on the Hardware Node and view it later on, if required. The description can be any text containing any Container-related information; for example, you can include the following in the Container description:
- the owner of the Container;
- the purpose of the Container;
- the summary description of the Container;
- etc.
Let us assume that you are asked to create a Container for a Mr. Johnson who is going to use it for hosting the MySQL server. So, you create Container 101 and, after that, execute the following command on the Hardware Node:
# vzctl set 101 --description "Container 101
> owner - Mr. Johnson
> purpose - hosting the MySQL server" --save
Saved parameters for CT 101
This command saves the following information related to the Container: its ID, owner, and the purpose of its creation. At any time, you can display this information by issuing the following command:
# vzlist -o description 101
DESCRIPTION
Container 101
owner - Mr. Johnson
purpose - hosting the MySQL server
You can also view the Container description by checking the DESCRIPTION parameter of the Container configuration file (/etc/vz/conf/101.conf). However, the data stored in this file are more suitable for parsing by the vzlist command than for viewing by a human, since all symbols in the DESCRIPTION field except the alphanumeric ones ('a-z', 'A-Z', and '0-9'), underscores ('_'), and dots ('.') are transformed to the corresponding hex character codes.
While working with Container descriptions, please keep in mind the following:
- You can use any symbols you like in the Container description (new lines, dashes, underscores, spaces, etc.).
- If the Container description contains one or more spaces or line breaks (as in the example above), it should be put in single or double quotes.
- As distinct from a Container name, a Container description cannot be used for performing Container-related operations (e.g. for starting or stopping a Container) and is meant for reference purposes only.
Migrating Container
An OpenVZ Hardware Node has higher availability requirements than a typical Linux system. If you are running your company mail server, file server, and web server in different Containers on one and the same Hardware Node, then shutting it down for a hardware upgrade will make all these services unavailable at once. To facilitate hardware upgrades and load balancing between several Hardware Nodes, the OpenVZ software provides you with the ability to migrate Containers from one physical box to another.
Migrating Containers is possible if OpenVZ is installed on two or more Hardware Nodes, so you are able to move a Container to another Node. Migration may be necessary if a Hardware Node is undergoing a planned maintenance or in certain other cases.
Standard (offline) migration
The standard migration procedure allows you to move both stopped and running Containers. Migrating a stopped Container includes copying all Container private files from one Node to another and does not differ from copying a number of files from one server to another over the network. In its turn, the migration procedure of a running Container is a bit more complicated and may be described as follows:
- After initiating the migration process, all Container private data are copied to the Destination Node. During this time, the Container on the Source Node continues running.
- The Container on the Source Node is stopped.
- The Container private data copied to the Destination Node are compared with those on the Source Node and, if any files were changed during the first migration step, they are copied to the Destination Node again and rewrite the outdated versions.
- The Container on the Destination Node is started.
WARNING: By default, after the migration process is completed, the Container private area and configuration file are deleted on the Source Node! However, if you wish the Container private area on the Source Node not to be removed after a successful Container migration, you can override the default vzmigrate behavior by using the -r no switch.
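For example, applying the -r no switch to the migration command shown later in this section would move Container 101 to ts7.mydomain.com while keeping its private area on the Source Node:
# vzmigrate -r no ts7.mydomain.com 101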
There is a short downtime needed to stop the Container on the Source Node, copy the Container private data changes to the Destination Node, and start the Container on the Destination Node. However, this time is very short and does not usually exceed one minute.
The following session moves Container 101 from the current Hardware Node to a new one named ts7.mydomain.com:
# vzmigrate ts7.mydomain.com 101
Starting migration of container 101 on ts7.mydomain.com
Preparing remote node
Initializing remote quota
Syncing private
Syncing 2nd level quota
Turning quota off
Cleanup
Note: For the command to be successful, a direct SSH connection (on port 22) should be allowed between the Source and Destination Nodes.
Zero-downtime (online) migration
OpenVZ allows you to migrate your Containers from one Hardware Node to another with zero downtime. The zero downtime migration technology has the following main advantages as compared with the standard one:
- The process of migrating a Container to another Node is transparent for you and the Container applications and network connections, i.e., on the Source and Destination Nodes, no modifications of system characteristics and operational procedures inside the Container are performed.
- The Container migration time is greatly reduced. In fact, the migration eliminates the service outage or interruption for Container end users.
- The Container is restored on the Destination Node in the same state as it was at the beginning of the migration.
- You can move the Containers running a number of applications which you do not want to be rebooted during the migration for some reason or another.
Note: Zero-downtime migration cannot be performed on Containers having one or several opened sessions established with the vzctl enter CTID command.
Before performing zero-downtime migration, it is recommended to synchronize the system time on the Source and Destination Nodes, e.g. by means of NTP (http://www.ntp.org). The reason for this recommendation is that some processes running in the Container might rely on the system time being monotonic and thus might behave unpredictably if they see an abrupt step forward or backward in time once they find themselves on the new Node with different system clock parameters.
To migrate a Container using the zero-downtime migration technology, you should pass the --online option to the vzmigrate utility. In this case a Container is 'dumped' at the beginning of the migration, i.e. all Container private data including the state of all running processes are saved to an image file. This image file is then transferred to the Destination Node where it is 'undumped'.
For example, you can migrate Container 101 from the current Hardware Node to the Destination Node named my_node.com by executing the following command:
# vzmigrate --online my_node.com 101
Deleting Container[edit]
You can delete a Container that is not needed anymore with the vzctl destroy CTID command. This command removes the Container private area completely and renames the Container configuration file and action scripts by appending the .destroyed suffix to them.
A running Container cannot be destroyed with the vzctl destroy command. The example below illustrates destroying Container 101:
# vzctl destroy 101
Container is currently running. Stop it first.
# vzctl stop 101
Stopping container ...
Container was stopped
Container is unmounted
# vzctl destroy 101
Destroying container private area: /vz/private/101
Container private area was destroyed
# ls /etc/vz/conf/101.*
/etc/vz/conf/101.conf.destroyed
# vzctl status 101
CTID 101 deleted unmounted down
If you do not need the backup copy of the Container configuration files (with the .destroyed suffix), you may delete them manually.
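For example, assuming the backup file shown in the session above, removing it could look like this:
# rm -f /etc/vz/conf/101.conf.destroyed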
Disabling Container[edit]
Situations may arise when you wish to forbid Container owners from using their Containers. For example, it may happen if the Container owner uses it for illegitimate purposes: intruding into computers of other users, participating in DoS attacks, etc.
In such cases, the OpenVZ software allows you to disable a Container, thus making it impossible to start the Container once it has been stopped. For example, you can execute the following command to disable Container 101 residing on your Hardware Node:
# vzctl set 101 --disabled yes --save
Note: This option makes no sense without the --save flag, so you have to supply it.
After the Container is stopped, its user will not be able to start it again until you enable this Container again by passing the --disabled no option to vzctl set. You can also use the --force option to start any disabled Container. For example:
# vzctl start 103
Container start disabled
# vzctl start 103 --force
Starting container ...
Container is mounted
Adding IP address(es): 192.168.16.3
Setting CPU units: 1000
Configure meminfo: 65536
Container start in progress...
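Conversely, once the issue with the Container owner is resolved, you can re-enable the Container so that it starts normally again; a minimal sketch using the same CT ID as above:
# vzctl set 103 --disabled no --save
# vzctl start 103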
Suspending Container[edit]
OpenVZ allows you to suspend any running Container on the Hardware Node by saving its current state to a special dump file. Later on, you can resume the Container and get it back in the same state it was in at the time of its suspension.
In OpenVZ-based systems, you can use the vzctl chkpnt command to save the current state of a Container. For example, you can issue the following command to suspend Container 101:
# vzctl chkpnt 101
Setting up checkpoint...
	suspend...
	dump...
	kill...
Container is unmounted
Checkpointing completed successfully
During the command execution, the /vz/dump/Dump.101 file containing the entire state of Container 101 is created, and the Container itself is stopped.
Note: You can set another directory to store dump files for your Containers by changing the value of the DUMPDIR parameter in the OpenVZ global configuration file. Detailed information on the OpenVZ global file and the parameters you can specify in it is provided in vz.conf(5).
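For instance, if you prefer to keep dump files on a dedicated partition, you could set the DUMPDIR parameter in /etc/vz/vz.conf as shown below; the path is purely illustrative:
DUMPDIR=/vz2/dump    # hypothetical alternative location for Container dump files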
At any time, you can resume Container 101 by executing the following command:
# vzctl restore 101
Restoring container ...
Starting container ...
Container is mounted
	undump...
Adding IP address(es): 192.168.16.3
Setting CPU units: 1000
Configure meminfo: 65536
	resume...
Container start in progress...
Restoring completed successfully
The Container state is restored from the /vz/dump/Dump.101 file on the Node. Upon the restoration completion, any applications that were running inside Container 101 at the time of its suspending will be running again, and the information content will be the same as it was when the Container was suspended.
While working with dump files, please keep in mind the following:
- You can restore the Container dump file on the Source Node, i.e. on the Node where this Container was running before its dumping, or transfer the dump file to another Node and restore it there (a sketch of the latter case follows this list).
- You cannot change settings of the suspended Container.
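A minimal sketch of moving the dump to another Node and restoring it there, assuming the default dump location, a hypothetical Destination Node name (node2.mydomain.com), and that the Container private area and configuration file are already available on that Node (for example, on shared storage):
# scp /vz/dump/Dump.101 node2.mydomain.com:/vz/dump/
# ssh node2.mydomain.com vzctl restore 101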
Running Commands in Container[edit]
Usually, a Container administrator logs in to the Container via network and executes any commands in the Container as on any other Linux box. However, you might need to execute commands inside Containers bypassing the normal login sequence. This can be helpful if:
- You do not know the Container login information, and you need to run some diagnosis commands inside the Container in order to verify that it is operational.
- Network access is absent for a Container. For example, the Container administrator might have accidentally applied incorrect firewalling rules or stopped the SSH daemon.
The OpenVZ software allows you to execute commands in a Container in these cases. Use the vzctl exec command for running a command inside the Container with the given ID. The session below illustrates the situation when the SSH daemon is not started:
# vzctl exec 103 /etc/init.d/sshd status
openssh-daemon is stopped
# vzctl exec 103 /etc/init.d/sshd start
Starting sshd:                                             [  OK  ]
# vzctl exec 103 /etc/init.d/sshd status
openssh-daemon (pid 9899) is running...
Now Container users can log in to the Container via SSH (assuming that networking and firewall are not misconfigured).
When executing commands inside a Container from shell scripts, use the vzctl exec2 command. It has the same syntax as vzctl exec but returns the exit code of the command being executed instead of the exit code of vzctl itself. You can check the exit code to find out whether the command has completed successfully.
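A minimal sketch of using that exit code in a script; the Container ID and service are taken from the session above, and the script itself is only an illustration:
#!/bin/sh
# vzctl exec2 propagates the exit code of the command run inside CT 103
if vzctl exec2 103 /etc/init.d/sshd status >/dev/null 2>&1; then
    echo "sshd is running in CT 103"
else
    echo "sshd is not running in CT 103"
fi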
If you wish to execute a command in all running Containers, you can use the following script:
# for CT in $(vzlist -H -o ctid); do echo "== CT $CT =="; vzctl exec $CT command; done
where command is the command to be executed in all the running Containers. For example:
# for CT in $(vzlist -H -o ctid); do echo "== CT $CT =="; vzctl exec $CT uptime; done
== CT 103 ==
 15:17:19 up 13 min,  0 users,  load average: 0.00, 0.00, 0.00
== CT 123123123 ==
 15:17:19 up 22:00,  0 users,  load average: 0.00, 0.00, 0.00
Managing Resources[edit]
The main goal of resource control in OpenVZ is to provide Service Level Management or Quality of Service (QoS) for Containers. Correctly configured resource control settings prevent serious impacts resulting from the resource over-usage (accidental or malicious) of any Container on the other Containers. Using resource control parameters for Quality of Service management also allows you to enforce fairness of resource usage among Containers and better service quality for preferred CTs, if necessary.
What are Resource Control Parameters?[edit]
The system administrator controls the resources available to a Container through a set of resource management parameters. All these parameters are defined either in the OpenVZ global configuration file (/etc/vz/vz.conf), or in the respective CT configuration files (/etc/vz/conf/CTID.conf), or in both. You can set them by manually editing the corresponding configuration files, or by using the OpenVZ command-line utilities. These parameters can be divided into the disk, network, CPU, and system categories. The table below summarizes these groups:
Group | Description | Parameter names | Explained in |
---|---|---|---|
Disk | This group of parameters determines disk quota in OpenVZ. The OpenVZ disk quota is realized on two levels: the per-CT level and the per-user/group level. You can turn on/off disk quota on any level and configure its settings. | DISK_QUOTA, DISKSPACE, DISKINODES, QUOTATIME, QUOTAUGIDLIMIT, IOPRIO | #Managing Disk Quotas |
CPU | This group of parameters defines the CPU time different CTs are guaranteed to receive. | VE0CPUUNITS, CPUUNITS | #Managing CPU Share |
System | This group of parameters defines various aspects of using system memory, TCP sockets, IP packets and like parameters by different CTs. | avnumproc, numproc, numtcpsock, numothersock, vmguarpages, kmemsize, tcpsndbuf, tcprcvbuf, othersockbuf, dgramrcvbuf, oomguarpages, lockedpages, shmpages, privvmpages, physpages, numfile, numflock, numpty, numsiginfo, dcachesize, numiptent | #Managing System Parameters |
Managing Disk Quotas[edit]
This section explains what disk quotas are, defines disk quota parameters, and describes how to perform disk quota related operations:
- Turning on and off per-CT (first-level) disk quotas;
- Setting up first-level disk quota parameters for a Container;
- Turning on and off per-user and per-group (second-level) disk quotas inside a Container;
- Setting up second-level quotas for a user or for a group;
- Checking disk quota statistics;
- Cleaning up Containers in certain cases.
What are Disk Quotas?[edit]
Disk quotas enable system administrators to control the size of Linux file systems by limiting the amount of disk space and the number of inodes a Container can use. These quotas are known as per-CT quotas or first-level quotas in OpenVZ. In addition, OpenVZ enables the Container administrator to limit disk space and the number of inodes that individual users and groups in that CT can use. These quotas are called per-user and per-group quotas or second-level quotas in OpenVZ.
By default, OpenVZ has first-level quotas enabled (which is defined in the OpenVZ global configuration file), whereas second-level quotas must be turned on for each Container separately (in the corresponding CT configuration files). It is impossible to turn on second-level disk quotas for a Container if first-level disk quotas are off for that Container.
The disk quota block size in OpenVZ is always 1024 bytes. It may differ from the block size of the underlying file system.
OpenVZ keeps quota usage statistics and limits in a special quota file, /var/vzquota/quota.ctid. The quota file has a special flag indicating whether the file is “dirty”. The file becomes dirty when its contents become inconsistent with the real CT usage. This means that when the disk space or inodes usage changes during the CT operation, these statistics are not automatically synchronized with the quota file; the file just gets the “dirty” flag. They are synchronized only when the CT is stopped or when the Hardware Node is shut down. After synchronization, the “dirty” flag is removed. If the Hardware Node has been incorrectly brought down (for example, the power switch was hit), the file remains “dirty”, and the quota is re-initialized on the next CT startup. This operation may noticeably increase the Node startup time. Thus, it is highly recommended to shut down the Hardware Node properly.
Disk Quota Parameters[edit]
The table below summarizes the disk quota parameters that you can control. The File column indicates whether the parameter is defined in the OpenVZ global configuration file (G), in the CT configuration files (V), or it is defined in the global configuration file but can be overridden in a separate CT configuration file (GV).
Parameter | Description | File |
---|---|---|
disk_quota | Indicates whether first-level quotas are on or off for all CTs or for a separate CT. | GV |
diskspace | Total size of disk space the CT may consume, in 1-Kb blocks. | V |
diskinodes | Total number of disk inodes (files, directories, and symbolic links) the Container can allocate. | V |
quotatime | The grace period for the disk quota overusage defined in seconds. The Container is allowed to temporarily exceed its quota soft limits for no more than the QUOTATIME period. | V |
quotaugidlimit | Number of user/group IDs allowed for the CT internal disk quota. If set to 0, the UID/GID quota will not be enabled. | V |
Turning On and Off Per-CT Disk Quotas[edit]
The parameter that defines whether to use first-level disk quotas is DISK_QUOTA in the OpenVZ global configuration file (/etc/vz/vz.conf). By setting it to “no”, you will disable OpenVZ quotas completely.
This parameter can be specified in the Container configuration file (/etc/vz/conf/ctid.conf) as well. In this case its value will take precedence over the one specified in the global configuration file. If you intend to have a mixture of Containers with quotas turned on and off, it is recommended to set the DISK_QUOTA value to “yes” in the global configuration file and to “no” in the configuration files of those CTs which do not need quotas.
The session below illustrates a scenario when first-level quotas are on by default and are turned off for Container 101:
[checking that quota is on]
# grep DISK_QUOTA /etc/vz/vz.conf
DISK_QUOTA=yes
[checking available space on /vz partition]
# df /vz
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/sda2              8957295   1421982   7023242  17% /vz
[editing CT configuration file to add DISK_QUOTA=no]
# vi /etc/vz/conf/101.conf
[checking that quota is off for CT 101]
# grep DISK_QUOTA /etc/vz/conf/101.conf
DISK_QUOTA=no
# vzctl start 101
Starting CT ...
CT is mounted
Adding IP address(es): 192.168.1.101
Hostname for CT set: vps101.my.org
CT start in progress...
# vzctl exec 101 df
Filesystem           1k-blocks      Used Available Use% Mounted on
simfs                  8282373    747060   7023242  10% /
As the above example shows, the only disk space limit a Container with the quotas turned off has is the available space and inodes on the partition where the CT private area resides.
Setting Up Per-CT Disk Quota Parameters[edit]
Three parameters determine how much disk space and inodes a Container can use. These parameters are specified in the Container configuration file:
- DISKSPACE – Total size of disk space that can be consumed by the Container in 1-Kb blocks. When the space used by the Container hits the soft limit, the CT can allocate additional disk space up to the hard limit during the grace period specified by the QUOTATIME parameter.
- DISKINODES – Total number of disk inodes (files, directories, and symbolic links) the Container can allocate. When the number of inodes used by the Container hits the soft limit, the CT can create additional file entries up to the hard limit during the grace period specified by the QUOTATIME parameter.
- QUOTATIME – The grace period of the disk quota specified in seconds. The Container is allowed to temporarily exceed the soft limit values for the disk space and disk inodes quotas for no more than the period specified by this parameter.
The first two parameters have both soft and hard limits (or, simply, barriers and limits). The hard limit is the limit that cannot be exceeded under any circumstances. The soft limit can be exceeded up to the hard limit, but as soon as the grace period expires, the additional disk space or inodes allocations will fail. Barriers and limits are separated by colons (“:”) in Container configuration files and in the command line.
The following session sets the disk space available to Container 101 to approximately 1Gb and allows the CT to allocate up to 90,000 inodes. The grace period for the quotas is set to ten minutes:
# vzctl set 101 --diskspace 1000000:1100000 --save
Saved parameters for CT 101
# vzctl set 101 --diskinodes 90000:91000 --save
Saved parameters for CT 101
# vzctl set 101 --quotatime 600 --save
Saved parameters for CT 101
# vzctl exec 101 df
Filesystem           1k-blocks      Used Available Use% Mounted on
simfs                  1000000    747066    252934  75% /
# vzctl exec 101 stat -f /
  File: "/"
    ID: 0        Namelen: 255     Type: ext2/ext3
Blocks: Total: 1000000    Free: 252934    Available: 252934    Size: 1024
Inodes: Total: 90000      Free: 9594
It is possible to change the first-level disk quota parameters for a running Container. The changes will take effect immediately. If you do not want your changes to persist till the next Container startup, do not use the --save switch.
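For example, a change applied on the fly but not saved to the configuration file (and therefore lost after the next Container restart) might look as follows; the values are only an illustration:
# vzctl set 101 --diskspace 2000000:2200000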
Turning On and Off Second-Level Quotas for Container[edit]
The parameter that controls the second-level disk quotas is QUOTAUGIDLIMIT in the Container configuration file. By default, the value of this parameter is zero, which corresponds to disabled per-user/group quotas.
If you assign a non-zero value to the QUOTAUGIDLIMIT parameter, this action brings about the following two results:
- Second-level (per-user and per-group) disk quotas are enabled for the given Container;
- The value that you assign to this parameter will be the limit for the number of file owners and groups of this Container, including Linux system users. Note that you will theoretically be able to create extra users of this Container, but if the number of file owners inside the Container has already reached the limit, these users will not be able to own files.
Enabling per-user/group quotas for a Container requires restarting the Container. The value should be carefully chosen: the bigger the value you set, the bigger the kernel memory overhead this Container creates. This value must be greater than or equal to the number of entries in the Container /etc/passwd and /etc/group files. Taking into account that a newly created Red Hat Linux-based CT has about 80 entries in total, the typical value would be 100. However, for Containers with a large number of users this value may be increased.
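As a rough sanity check before choosing the value, you can count the existing entries inside the Container; a minimal sketch (the pipeline is quoted so that it runs inside the CT):
# vzctl exec 101 "cat /etc/passwd /etc/group | wc -l"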
The session below turns on second-level quotas for Container 101:
# vzctl set 101 --quotaugidlimit 100 --save
Unable to apply new quota values: ugid quota not initialized
Saved parameters for CT 101
# vzctl restart 101
Restarting container
Stopping container ...
Container was stopped
Container is unmounted
Starting container ...
Container is mounted
Adding IP address(es): 192.168.16.123
Setting CPU units: 1000
Configure meminfo: 65536
File resolv.conf was modified
Container start in progress...
Setting Up Second-Level Disk Quota Parameters[edit]
In order to work with disk quotas inside a Container, you should have standard quota tools installed:
# vzctl exec 101 rpm -q quota
quota-3.12-5
This command shows that the quota package is installed into the Container. Use the utilities from this package (as is prescribed in your Linux manual) to set OpenVZ second-level quotas for the given CT. For example:
# ssh ve101
root@ve101's password:
Last login: Sat Jul 5 00:37:07 2003 from 10.100.40.18
[root@ve101 root]# edquota root
Disk quotas for user root (uid 0):
  Filesystem    blocks   soft   hard   inodes   soft   hard
  /dev/simfs     38216  50000  60000    45454  70000  70000
[root@ve101 root]# repquota -a
*** Report for user quotas on device /dev/simfs
Block grace time: 00:00; Inode grace time: 00:00
                  Block limits                File limits
User         used   soft   hard  grace    used   soft   hard  grace
----------------------------------------------------------------------
root     --  38218  50000  60000          45453  70000  70000
[the rest of repquota output is skipped]
[root@ve101 root]# dd if=/dev/zero of=test
dd: writing to `test': Disk quota exceeded
23473+0 records in
23472+0 records out
[root@ve101 root]# repquota -a
*** Report for user quotas on device /dev/simfs
Block grace time: 00:00; Inode grace time: 00:00
                  Block limits                File limits
User         used   soft   hard  grace    used   soft   hard  grace
----------------------------------------------------------------------
root     +-  50001  50000  60000   none   45454  70000  70000
[the rest of repquota output is skipped]
The above example shows the session when the root user has the disk space quota set to the hard limit of 60,000 1Kb blocks and to the soft limit of 50,000 1Kb blocks; both hard and soft limits for the number of inodes are set to 70,000.
It is also possible to set the grace period separately for block limits and inodes limits with the help of the /usr/sbin/setquota command. For more information on using the utilities from the quota package, please consult the system administration guide shipped with your Linux distribution or manual pages included in the package.
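For instance, to set both the block and the inode grace periods for user quotas to one week inside the Container, a hedged sketch (check the setquota(8) man page shipped with your distribution for the exact syntax) could be:
[root@ve101 root]# setquota -u -t 604800 604800 /dev/simfs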
Checking Quota Status[edit]
As the Hardware Node system administrator, you can check the quota status for any Container with the vzquota stat and vzquota show commands. The first command reports the status from the kernel and should be used for running Containers. The second command reports the status from the quota file (located at /var/vzquota/quota.CTID) and should be used for stopped Containers. Both commands have the same output format.
The session below shows a partial output of CT 101 quota statistics:
# vzquota stat 101 -t
   resource          usage  softlimit  hardlimit    grace
  1k-blocks          38281    1000000    1100000
     inodes          45703      90000      91000
User/group quota: on,active
Ugids: loaded 34, total 34, limit 100
Ugid limit was exceeded: no
User/group grace times and quotafile flags:
  type    block_exp_time  inode_exp_time  dqi_flags
  user                                           0h
  group                                          0h
User/group objects:
ID      type     resource        usage  softlimit  hardlimit  grace  status
0       user     1k-blocks       38220      50000      60000         loaded
0       user     inodes          45453      70000      70000         loaded
[the rest is skipped]
The first three lines of the output show the status of first-level disk quotas for the Container. The rest of the output displays statistics for user/group quotas and has separate lines for each user and group ID existing in the system.
If you do not need the second-level quota statistics, you can omit the -t switch from the vzquota command line.
Configuring Container Disk I/O Priority Level[edit]
OpenVZ provides you with the capability of configuring the Container disk I/O (input/output) priority level. The higher the Container I/O priority level, the more time the Container will get for its disk I/O activities as compared to the other Containers on the Hardware Node. By default, any Container on the Hardware Node has the I/O priority level set to 4. However, you can change the current Container I/O priority level in the range from 0 to 7 using the --ioprio option of the vzctl set command. For example, you can issue the following command to set the I/O priority of Container 101 to 6:
# vzctl set 101 --ioprio 6 --save
Saved parameters for Container 101
To check the I/O priority level currently applied to Container 101, you can execute the following command:
# grep IOPRIO /etc/vz/conf/101.conf
IOPRIO="6"
The command output shows that the current I/O priority level is set to 6.
Managing Container CPU resources[edit]
The current section explains the CPU resource parameters that you can configure and monitor for each Container.
The table below provides the name and the description for the CPU parameters. The File column indicates whether the parameter is defined in the OpenVZ global configuration file (G) or in the CT configuration files (V).
Managing CPU Share[edit]
The OpenVZ CPU resource control utilities allow you to guarantee any Container the amount of CPU time this Container receives. The Container can consume more than the guaranteed value if there are no other Containers competing for the CPU and the cpulimit parameter is not defined.
Note: The CPU time shares and limits are calculated on the basis of a one-second period. Thus, for example, if a Container is not allowed to receive more than 50% of the CPU time, it will be able to receive no more than half a second each second.
To get a view of the optimal share to be assigned to a Container, check the current Hardware Node CPU utilization:
# vzcpucheck
Current CPU utilization: 5166
Power of the node: 73072.5
The output of this command displays the total number of the so-called CPU units consumed by all running Containers and Hardware Node processes. This number is calculated by OpenVZ with the help of a special algorithm. The above example illustrates the situation when the Hardware Node is underused. In other words, the running Containers receive more CPU time than was guaranteed to them.
In the following example, Container 102 is guaranteed to receive about 2% of the CPU time even if the Hardware Node is fully used, or in other words, if the current CPU utilization equals the power of the Node. Besides, CT 102 will not receive more than 4% of the CPU time even if the CPU is not fully loaded:
# vzctl set 102 --cpuunits 1500 --cpulimit 4 --save
Saved parameters for CT 102
# vzctl start 102
Starting CT ...
CT is mounted
Adding IP address(es): 192.168.1.102
CT start in progress...
# vzcpucheck
Current CPU utilization: 6667
Power of the node: 73072.5
Container 102 will receive from 2 to 4% of the Hardware Node CPU time unless the Hardware Node is overcommitted, i.e. the running Containers have been promised more CPU units than the power of the Hardware Node. In this case the CT might get less than 2 per cent.
Note: To set the --cpuunits parameter for the Hardware Node, you should indicate 0 as the Container ID (e.g. vzctl set 0 --cpuunits 5000 --save).
Configuring Number of CPUs Inside Container[edit]
If your Hardware Node has more than one physical processor installed, you can control the number of CPUs which will be used to handle the processes running inside separate Containers. By default, a Container is allowed to consume the CPU time of all processors on the Hardware Node, i.e. any process inside any Container can be executed on any processor on the Node. However, you can modify the number of physical CPUs which will be simultaneously available to a Container using the --cpus option of the vzctl set command. For example, if your Hardware Node has 4 physical processors installed, i.e. any Container on the Node can make use of these 4 processors, you can set the processes inside Container 101 to be run on 2 CPUs only by issuing the following command:
# vzctl set 101 --cpus 2 --save
Note: The number of CPUs to be set for a Container must not exceed the number of physical CPUs installed on the Hardware Node. In this context, 'physical CPUs' designates the number of CPUs the OpenVZ kernel is aware of (you can view this CPU number using the cat /proc/cpuinfo command on the Hardware Node).
You can check if the number of CPUs has been successfully changed by running the cat /proc/cpuinfo command inside your Container. Assuming that you have set two physical processors to handle the processes inside Container 101, your command output may look as follows:
# vzctl exec 101 cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 15
model           : 4
model name      : Intel(R) Xeon(TM) CPU 2.80GHz
stepping        : 1
cpu MHz         : 2793.581
cache size      : 1024 KB
...
processor       : 1
vendor_id       : GenuineIntel
cpu family      : 15
model           : 4
model name      : Intel(R) Xeon(TM) CPU 2.80GHz
stepping        : 1
cpu MHz         : 2793.581
cache size      : 1024 KB
...
The output shows that Container 101 is currently bound to only two processors on the Hardware Node instead of the 4 available to the other Containers on this Node. It means that, from this point on, the processes of Container 101 will be simultaneously executed on no more than 2 physical CPUs, while the other Containers on the Node will continue consuming the CPU time of all 4 Hardware Node processors, if needed. Please note also that the particular physical CPUs assigned to Container 101 might not remain the same during the Container operation; they might change for load-balancing reasons. The only thing that cannot be changed is their maximal number.
Managing System Parameters[edit]
The resources a Container may allocate are defined by the system resource control parameters. These parameters can be subdivided into the following categories: primary, secondary, and auxiliary parameters. The primary parameters are the starting point for creating a Container configuration from scratch. The secondary parameters are dependent on the primary ones and are calculated from them according to a set of constraints. The auxiliary parameters help improve fault isolation among applications in one and the same Container and the way applications handle errors and consume resources. They also help enforce administrative policies on Containers by limiting the resources required by an application and preventing the application from running in the Container.
Listed below are all the system resource control parameters. The parameters whose names start with "num" are measured in integer units. The parameters ending in "buf" or "size" are measured in bytes. The parameters containing "pages" in their names are measured in 4096-byte pages (IA32 architecture). The File column indicates that all the system parameters are defined in the corresponding CT configuration files (V).
Parameter | Description | File |
---|---|---|
avnumproc | The average number of processes and threads. | V |
numproc | The maximal number of processes and threads the CT may create. | V |
numtcpsock | The number of TCP sockets (PF_INET family, SOCK_STREAM type). This parameter limits the number of TCP connections and, thus, the number of clients the server application can handle in parallel. | V |
numothersock | The number of sockets other than TCP ones. Local (UNIX-domain) sockets are used for communications inside the system. UDP sockets are used, for example, for Domain Name Service (DNS) queries. UDP and other sockets may also be used in some very specialized applications (SNMP agents and others). | V |
vmguarpages | The memory allocation guarantee, in pages (one page is 4 Kb). CT applications are guaranteed to be able to allocate additional memory so long as the amount of memory accounted as privvmpages (see the auxiliary parameters) does not exceed the configured barrier of the vmguarpages parameter. Above the barrier, additional memory allocation is not guaranteed and may fail in case of overall memory shortage. | V |
kmemsize | The size of unswappable kernel memory allocated for the internal kernel structures for the processes of a particular CT. | V |
tcpsndbuf | The total size of send buffers for TCP sockets, i.e. the amount of kernel memory allocated for the data sent from an application to a TCP socket, but not acknowledged by the remote side yet. | V |
tcprcvbuf | The total size of receive buffers for TCP sockets, i.e. the amount of kernel memory allocated for the data received from the remote side, but not read by the local application yet. | V |
othersockbuf | The total size of UNIX-domain socket buffers, UDP, and other datagram protocol send buffers. | V |
dgramrcvbuf | The total size of receive buffers of UDP and other datagram protocols. | V |
oomguarpages | The out-of-memory guarantee, in pages (one page is 4 Kb). Any CT process will not be killed even in case of heavy memory shortage if the current memory consumption (including both physical memory and swap) does not reach the oomguarpages barrier. | V |
lockedpages | The memory not allowed to be swapped out (locked with the mlock() system call), in pages. | V |
shmpages | The total size of shared memory (including IPC, shared anonymous mappings and tmpfs objects) allocated by the processes of a particular CT, in pages. | V |
privvmpages | The size of private (or potentially private) memory allocated by an application. The memory that is always shared among different applications is not included in this resource parameter. | V |
numfile | The number of files opened by all CT processes. | V |
numflock | The number of file locks created by all CT processes. | V |
numpty | The number of pseudo-terminals, such as an ssh session, the screen or xterm applications, etc. | V |
numsiginfo | The number of siginfo structures (essentially, this parameter limits the size of the signal delivery queue). | V |
dcachesize | The total size of dentry and inode structures locked in the memory. | V |
physpages | The total size of RAM used by the CT processes. This is an accounting-only parameter currently. It shows the usage of RAM by the CT. For the memory pages used by several different CTs (mappings of shared libraries, for example), only the corresponding fraction of a page is charged to each CT. The sum of the physpages usage for all CTs corresponds to the total number of pages used in the system by all the accounted users. | V |
numiptent | The number of IP packet filtering entries. | V |
You can edit any of these parameters in the /etc/vz/conf/CTID.conf file of the corresponding Container by means of your favorite text editor (for example, vi or emacs), or by running the vzctl set command. For example:
# vzctl set 101 --kmemsize 2211840:2359296 --save
Saved parameters for CT 101
Monitoring System Resources Consumption[edit]
It is possible to check the system resource control parameters statistics from within a Container. The primary use of these statistics is to understand which particular resource has limits preventing an application from starting. Moreover, these statistics report the current and maximal resources consumption for the running Container. This information can be obtained from the /proc/user_beancounters file.
The output below illustrates a typical session:
# vzctl exec 101 cat /proc/user_beancounters
Version: 2.5
       uid  resource       held    maxheld    barrier      limit    failcnt
      101:  kmemsize     803866    1246758    2457600    2621440          0
            lockedpages       0          0         32         32          0
            privvmpages    5611       7709      22528      24576          0
            shmpages         39        695       8192       8192          0
            dummy             0          0          0          0          0
            numproc          16         27         65         65          0
            physpages      1011       3113          0 2147483647          0
            vmguarpages       0          0       6144 2147483647          0
            oomguarpages   2025       3113       6144 2147483647          0
            numtcpsock        3          4         80         80          0
            numflock          2          4        100        110          0
            numpty            0          1         16         16          0
            numsiginfo        0          2        256        256          0
            tcpsndbuf         0       6684     319488     524288          0
            tcprcvbuf         0       4456     319488     524288          0
            othersockbuf   2228       9688     132096     336896          0
            dgramrcvbuf       0       4276     132096     132096          0
            numothersock      4         17         80         80          0
            dcachesize    78952     108488     524288     548864          0
            numfile         194        306       1280       1280          0
            dummy             0          0          0          0          0
            dummy             0          0          0          0          0
            dummy             0          0          0          0          0
            numiptent         0          0        128        128          0
The failcnt column displays the number of unsuccessful attempts to allocate a particular resource. If this value increases after an application fails to start, then the corresponding resource limit is in effect lower than is needed by the application.
The held column displays the current resource usage, and the maxheld column displays the maximal value of the resource consumption for the last accounting period. The meaning of the barrier and limit columns depends on the parameter and is explained in the UBC guide.
Inside a CT, the /proc/user_beancounters file displays the information on the given CT only, whereas on the Hardware Node this file displays the information on all the CTs.
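A minimal sketch of spotting the resources whose limits have already been hit: the failcnt counter is the last column of each resource line, so the following command (run on the Hardware Node or inside a Container; the first two header lines are skipped) prints only the lines with a non-zero failcnt:
# awk 'NR > 2 && $NF > 0' /proc/user_beancounters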
Monitoring Memory Consumption[edit]
You can monitor a number of memory parameters for the whole Hardware Node and for particular Containers with the help of the vzmemcheck utility. For example:
# vzmemcheck -v
Output values in %
veid       LowMem  LowMem     RAM  MemSwap  MemSwap   Alloc   Alloc   Alloc
             util  commit    util     util   commit    util  commit   limit
101          0.19    1.93    1.23     0.34     1.38    0.42    1.38    4.94
1            0.27    8.69    1.94     0.49     7.19    1.59    2.05   56.54
----------------------------------------------------------------------
Summary:     0.46   10.62    3.17     0.83     8.57    2.02    3.43   61.48
The -v option is used to display the memory information for each Container rather than for the Hardware Node in general. It is also possible to show the absolute values in megabytes by using the -A switch. The monitored parameters are (from left to right in the output above): low memory utilization, low memory commitment, RAM utilization, memory+swap utilization, memory+swap commitment, allocated memory utilization, allocated memory commitment, and allocated memory limit.
To understand these parameters, let us first draw the distinction between utilization and commitment levels.
- Utilization level is the amount of resources consumed by CTs at the given time. In general, low utilization values mean that the system is under-utilized. Often, it means that the system is capable of supporting more Containers if the existing CTs continue to maintain the same load and resource consumption level. High utilization values (in general, more than 1, or 100%) mean that the system is overloaded and the service level of the Containers is degraded.
- Commitment level shows how much resources are “promised” to the existing Containers. Low commitment levels mean that the system is capable of supporting more Containers. Commitment levels more than 1 mean that the Containers are promised more resources than the system has, and the system is said to be overcommitted. If the system runs a lot of CTs, it is usually acceptable to have some overcommitment because it is unlikely that all Containers will request resources at one and the same time. However, very high commitment levels will cause CTs to fail to allocate and use the resources promised to them and may hurt system stability.
Below is an overview of the resources checked by the vzmemcheck utility. Their complete description is provided in the UBC guide.
The low memory is the most important RAM area representing the part of memory residing at lower addresses and directly accessible by the kernel. In OpenVZ, the size of the “low” memory area is limited to 832 MB in the UP (uniprocessor) and SMP versions of the kernel, and to 3.6 GB in the Enterprise version of the kernel. If the total size of the computer RAM is less than the limit (832 MB or 3.6 GB, respectively), then the actual size of the “low” memory area is equal to the total memory size.
The union of RAM and swap space is the main computer resource determining the amount of memory available to applications. If the total size of memory used by applications exceeds the RAM size, the Linux kernel moves some data to swap and loads it back when the application needs it. More frequently used data tends to stay in RAM, less frequently used data spends more time in swap. Swap-in and swap-out activity reduces the system performance to some extent. However, if this activity is not excessive, the performance decrease is not very noticeable. On the other hand, the benefits of using swap space are quite big, allowing you to roughly double the number of Containers in the system. Swap space is essential for handling system load bursts. A system with enough swap space just slows down at high load bursts, whereas a system without swap space reacts to high load bursts by refusing memory allocations (causing applications to refuse to accept clients or terminate) and directly killing some applications. Additionally, the presence of swap space helps the system better balance memory and move data between the low memory area and the rest of the RAM.
Allocated memory is a more “virtual” system resource than the RAM or RAM plus swap space. Applications may allocate memory but start to use it only later, and only then will the amount of free physical memory really decrease. The sum of the sizes of memory allocated in all Containers is only the estimation of how much physical memory will be used if all applications claim the allocated memory. The memory available for allocation can be not only used (the Alloc util column) or promised (the Alloc commit column), but also limited (applications will not be able to allocate more resources than is indicated in the Alloc limit column).
Managing CT Resources Configuration[edit]
Any CT is configured by means of its own configuration file. You can manage your CT configurations in a number of ways:
- Using configuration sample files shipped with OpenVZ. These files are used when a new Container is being created (for details, see the #Creating and Configuring New Container section). They are stored in /etc/vz/conf/ and have the ve-name.conf-sample mask. Currently, the following configuration sample files are provided:
  - light – to be used for creating “light” CTs having restrictions on the upper limit of quality of service parameters;
  - basic – to be used for common CTs.
Note: Configuration sample files cannot contain spaces in their names.
Any sample configuration file may also be applied to a Container after it has been created. You would do this if, for example, you want to upgrade or downgrade the overall resources configuration of a particular CT:
# vzctl set 101 --applyconfig light --save
This command applies all the parameters from the ve-light.conf-sample file to the given CT, except for the OSTEMPLATE, VE_ROOT, and VE_PRIVATE parameters, should they exist in the sample configuration file.
- Using OpenVZ specialized utilities for preparing configuration files in their entirety. The tasks these utilities perform are described in the following subsections of this section.
- Creating and editing the corresponding configuration file (/etc/vz/conf/CTID.conf) directly. This can be performed with the help of any text editor. The instructions on how to edit CT configuration files directly are provided in the four preceding sections. In this case you have to edit all the configuration parameters separately, one by one.
Splitting Hardware Node Into Equal Pieces[edit]
It is possible to create a Container configuration roughly representing a given fraction of the Hardware Node. If you want to create such a configuration that up to 20 fully loaded Containers would be able to be simultaneously running on the given Hardware Node, you can do it as is illustrated below:
# cd /etc/vz/conf
# vzsplit -n 20 -f vps.mytest
Config /etc/vz/conf/ve-vps.mytest.conf-sample was created
# vzcfgvalidate /etc/vz/conf/ve-vps.mytest.conf-sample
Recommendation: kmemsize.lim-kmemsize.bar should be > 253952 (currently, 126391)
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 93622)
Note that the configuration produced depends on the given Hardware Node resources. Therefore, it is important to validate the resulting configuration file before trying to use it, which is done with the help of the vzcfgvalidate utility.
The number of Containers you can run on the Hardware Node is actually several times greater than the value specified in the command line because Containers normally do not consume all the resources that are guaranteed to them. To illustrate this idea, let us look at the Container created from the configuration produced above:
# vzctl create 101 --ostemplate centos-5 --config vps.mytest
Creating CT private area: /vz/private/101
CT private area was created
# vzctl set 101 --ipadd 192.168.1.101 --save
Saved parameters for CT 101
# vzctl start 101
Starting CT ...
CT is mounted
Adding IP address(es): 192.168.1.101
CT start in progress...
# vzcalc 101
Resource     Current(%)  Promised(%)  Max(%)
Memory             0.53         1.90    6.44
As is seen, if Containers use all the resources guaranteed to them, then around 20 CTs can be simultaneously running. However, taking into account the Promised column output, it is safe to run 40–50 such Containers on this Hardware Node.
Validating Container Configuration[edit]
The system resource control parameters have complex interdependencies. Violation of these interdependencies can be catastrophic for the Container. In order to ensure that a Container does not break them, it is important to validate the CT configuration file before creating CTs on its basis.
Here is how to validate a CT configuration:
# vzcfgvalidate /etc/vz/conf/101.conf
Error: kmemsize.bar should be > 1835008 (currently, 25000)
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 65536)
Recommendation: othersockbuf.bar should be > 132096 (currently, 122880)
The utility checks constraints on the resource management parameters and displays all the constraint violations found. There can be three levels of violation severity:
Recommendation | This is a suggestion, which is not critical for Container or Hardware Node operations. The configuration is valid in general; however, if the system has enough memory, it is better to increase the settings as advised. |
---|---|
Warning | A constraint is not satisfied, and the configuration is invalid. The Container applications may not have optimal performance or may fail in an ungraceful way. |
Error | An important constraint is not satisfied, and the configuration is invalid. The Container applications have increased chances to fail unexpectedly, to be terminated, or to hang. |
Manual adjustment[edit]
To fix errors or warnings reported by vzcfgvalidate, adjust the parameters accordingly and re-run vzcfgvalidate:
# vzctl set 101 --kmemsize 2211840:2359296 --save
Saved parameters for CT 101
# vzcfgvalidate /etc/vz/conf/101.conf
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 65536)
Recommendation: othersockbuf.bar should be > 132096 (currently, 122880)
Validation completed: success
In the scenario above, the first run of the vzcfgvalidate utility found a critical error for the kmemsize parameter value. After setting reasonable values for kmemsize, the resulting configuration produced only recommendations, and the Container can be safely run with this configuration.
Automatic adjustment[edit]
FIXME: vzcfgvalidate -r|-i
Applying New Configuration Sample to Container[edit]
The OpenVZ software enables you to change the configuration sample file a Container is based on and, thus, to modify all the resources the Container may consume and/or allocate at once. For example, if Container 101 is currently based on the light configuration sample and you are planning to run a more heavy-weight application inside the Container, you may wish to apply the basic sample to it instead of light, which will automatically adjust the necessary Container resource parameters. To this effect, you can execute the following command on the Node:
# vzctl set 101 --applyconfig basic --save
Saved parameters for CT 101
This command reads the resource parameters from the ve-basic.conf-sample file located in the /etc/vz/conf directory and applies them one by one to Container 101.
When applying new configuration samples to Containers, please keep in mind the following:
- All Container sample files are located in the /etc/vz/conf directory on the Hardware Node and are named according to the following pattern:
ve-name.conf-sample
. You should specify only thename
part of the corresponding sample name after the--applyconfig
option (basic
in the example above). - The
--applyconfig
option applies all the parameters from the specified sample file to the given Container, except for theOSTEMPLATE
,VE_ROOT
,VE_PRIVATE
,HOSTNAME
,IP_ADDRESS
,TEMPLATE
,NETIF
parameters (if they exist in the sample file). - You may need to restart your Container depending on the fact whether the changes for the selected parameters can be set on the fly or not. If some parameters could not be configured on the fly, you will be presented with the corresponding message informing you of this fact.