User Guide/Managing Resources

Warning: This User's Guide is still in development

The main goal of resource control in OpenVZ is to provide Service Level Management, or Quality of Service (QoS), for Containers. Correctly configured resource control settings prevent one Container's resource over-usage (accidental or malicious) from seriously affecting the other Containers. Resource control parameters also let you enforce fair resource usage among Containers and, if necessary, provide better service quality to preferred CTs.

What are Resource Control Parameters?

The system administrator controls the resources available to a Container through a set of resource management parameters. All these parameters are defined in the OpenVZ global configuration file (/etc/vz/vz.conf), in the respective CT configuration files (/etc/vz/conf/CTID.conf), or in both. You can set them by manually editing the corresponding configuration files or by using the OpenVZ command-line utilities. These parameters can be divided into the disk, CPU, and system categories. The table below summarizes these groups:

Group Description Parameter names Explained in
Disk This group of parameters determines disk quota in OpenVZ. The OpenVZ disk quota is realized on two levels: the per-CT level and the per-user/group level. You can turn on/off disk quota on any level and configure its settings. DISK_QUOTA, DISKSPACE, DISKINODES, QUOTATIME, QUOTAUGIDLIMIT, IOPRIO #Managing Disk Quotas
CPU This group of parameters defines the CPU time different CTs are guaranteed to receive. VE0CPUUNITS, CPUUNITS #Managing CPU Share
System This group of parameters defines various aspects of using system memory, TCP sockets, IP packets and like parameters by different CTs. avnumproc, numproc, numtcpsock, numothersock, vmguarpages, kmemsize, tcpsndbuf, tcprcvbuf, othersockbuf, dgramrcvbuf, oomguarpages, lockedpages, shmpages, privvmpages, physpages, numfile, numflock, numpty, numsiginfo, dcachesize, numiptent #Managing System Parameters

Managing Disk Quotas

This section explains what disk quotas are, defines disk quota parameters, and describes how to perform disk quota related operations:

  • Turning on and off per-CT (first-level) disk quotas;
  • Setting up first-level disk quota parameters for a Container;
  • Turning on and off per-user and per-group (second-level) disk quotas inside a Container;
  • Setting up second-level quotas for a user or for a group;
  • Checking disk quota statistics;
  • Cleaning up Containers in certain cases.

What are Disk Quotas?

Disk quotas enable system administrators to control the size of Linux file systems by limiting the amount of disk space and the number of inodes a Container can use. These quotas are known as per-CT quotas or first-level quotas in OpenVZ. In addition, OpenVZ enables the Container administrator to limit disk space and the number of inodes that individual users and groups in that CT can use. These quotas are called per-user and per-group quotas or second-level quotas in OpenVZ.

By default, OpenVZ has first-level quotas enabled (which is defined in the OpenVZ global configuration file), whereas second-level quotas must be turned on for each Container separately (in the corresponding CT configuration files). It is impossible to turn on second-level disk quotas for a Container if first-level disk quotas are off for that Container.

The disk quota block size in OpenVZ is always 1024 bytes. It may differ from the block size of the underlying file system.
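
For example, if the /vz partition resides on an ext3 file system created with 4 KB blocks, the file system block size and the quota block size differ, yet all quota figures are still reported in 1 KB units. The device name and output below are only illustrative:

# tune2fs -l /dev/sda2 | grep "Block size"
Block size:               4096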

OpenVZ keeps quota usage statistics and limits in a special quota file, /var/vzquota/quota.ctid. The quota file has a flag indicating whether it is “dirty”, i.e. whether its contents are inconsistent with the real CT usage. When the disk space or inode usage changes during CT operation, these statistics are not synchronized with the quota file immediately; the file is simply marked “dirty”. They are synchronized only when the CT is stopped or the Hardware Node is shut down, after which the “dirty” flag is removed. If the Hardware Node is brought down incorrectly (for example, by hitting the power switch), the file remains “dirty”, and the quota is re-initialized on the next CT startup. This operation may noticeably increase the Node startup time, so it is highly recommended to shut down the Hardware Node properly.
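
For example, you can see when the quota file of a Container was last synchronized by checking its timestamp (the file name follows the /var/vzquota/quota.ctid pattern described above; the output is illustrative):

# ls -l /var/vzquota/quota.101
-rw------- 1 root root 17920 Jul  5 00:30 /var/vzquota/quota.101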

Disk Quota Parameters

The table below summarizes the disk quota parameters that you can control. The File column indicates whether the parameter is defined in the OpenVZ global configuration file (G), in the CT configuration files (V), or it is defined in the global configuration file but can be overridden in a separate CT configuration file (GV).

Parameter Description File
disk_quota Indicates whether first-level quotas are on or off for all CTs or for a separate CT. GV
diskspace Total size of disk space the CT may consume, in 1-Kb blocks. V
diskinodes Total number of disk inodes (files, directories, and symbolic links) the Container can allocate. V
quotatime The grace period for disk quota over-usage, in seconds. The Container is allowed to temporarily exceed its quota soft limits for no more than the QUOTATIME period. V
quotaugidlimit Number of user/group IDs allowed for the CT internal disk quota. If set to 0, the UID/GID quota will not be enabled. V

Turning On and Off Per-CT Disk Quotas

The parameter that defines whether to use first-level disk quotas is DISK_QUOTA in the OpenVZ global configuration file (/etc/vz/vz.conf). By setting it to “no”, you will disable OpenVZ quotas completely.

This parameter can be specified in the Container configuration file (/etc/vz/conf/ctid.conf) as well. In this case its value takes precedence over the one specified in the global configuration file. If you intend to have a mixture of Containers with quotas turned on and off, it is recommended to set DISK_QUOTA to “yes” in the global configuration file and to “no” in the configuration files of the Containers that do not need quotas.

The session below illustrates a scenario when first-level quotas are on by default and are turned off for Container 101:

[checking that quota is on]
# grep DISK_QUOTA /etc/vz/vz.conf
DISK_QUOTA=yes

[checking available space on /vz partition]
# df /vz
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/sda2              8957295   1421982   7023242  17% /vz

[editing CT configuration file to add DISK_QUOTA=no]
# vi /etc/vz/conf/101.conf

[checking that quota is off for CT 101]
# grep DISK_QUOTA /etc/vz/conf/101.conf
DISK_QUOTA=no

# vzctl start 101
Starting CT ...
CT is mounted
Adding IP address(es): 192.168.1.101
Hostname for CT set: vps101.my.org
CT start in progress...
# vzctl exec 101 df
Filesystem           1k-blocks      Used Available Use% Mounted on
simfs                   8282373    747060   7023242  10% /

As the above example shows, a Container with quotas turned off is limited only by the available space and inodes on the partition where the CT private area resides.

Note: Change the DISK_QUOTA parameter in the global OpenVZ configuration file only when all Containers are stopped, and in a CT configuration file only when the corresponding CT is stopped. Otherwise, the configuration may become inconsistent with the real quota usage, which can interfere with normal Hardware Node operation.

Setting Up Per-CT Disk Quota Parameters

Three parameters determine how much disk space and inodes a Container can use. These parameters are specified in the Container configuration file:

DISKSPACE
Total size of disk space that can be consumed by the Container in 1-Kb blocks. When the space used by the Container hits the soft limit, the CT can allocate additional disk space up to the hard limit during the grace period specified by the QUOTATIME parameter.
DISKINODES
Total number of disk inodes (files, directories, and symbolic links) the Container can allocate. When the number of inodes used by the Container hits the soft limit, the CT can create additional file entries up to the hard limit during the grace period specified by the QUOTATIME parameter.
QUOTATIME
The grace period of the disk quota specified in seconds. The Container is allowed to temporarily exceed the soft limit values for the disk space and disk inodes quotas for no more than the period specified by this parameter.

The first two parameters have both soft and hard limits (or, simply, barriers and limits). The hard limit is the limit that cannot be exceeded under any circumstances. The soft limit can be exceeded up to the hard limit, but once the grace period expires, additional disk space or inode allocations fail. Barriers and limits are separated by colons (“:”) in Container configuration files and on the command line.
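
In a Container configuration file these barrier:limit pairs look as follows (the values here are only illustrative; the session below shows how to set them):

# grep -E "DISKSPACE|DISKINODES|QUOTATIME" /etc/vz/conf/101.conf
DISKSPACE="1000000:1100000"
DISKINODES="90000:91000"
QUOTATIME="600"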

The following session sets the disk space available to Container 101 to approximately 1Gb and allows the CT to allocate up to 90,000 inodes. The grace period for the quotas is set to ten minutes:

# vzctl set 101 --diskspace 1000000:1100000 --save
Saved parameters for CT 101
# vzctl set 101 --diskinodes 90000:91000 --save
Saved parameters for CT 101
# vzctl set 101 --quotatime 600 --save
Saved parameters for CT 101
# vzctl exec 101 df
Filesystem           1k-blocks      Used Available Use% Mounted on
simfs                  1000000    747066    252934  75% /
# vzctl exec 101 stat -f /
 File: "/"
   ID: 0        Namelen: 255    Type: ext2/ext3
Blocks: Total: 1000000   Free: 252934   Available: 252934   Size: 1024
Inodes: Total: 90000     Free: 9594

It is possible to change the first-level disk quota parameters for a running Container; the changes take effect immediately. If you do not want your changes to survive the next Container restart, do not use the --save switch.
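
For example, to raise the disk space quota of Container 101 only until its next restart, omit the --save switch:

# vzctl set 101 --diskspace 2000000:2200000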

Turning On and Off Second-Level Quotas for a Container

The parameter that controls the second-level disk quotas is QUOTAUGIDLIMIT in the Container configuration file. By default, the value of this parameter is zero and this corresponds to disabled per-user/group quotas.

If you assign a non-zero value to the QUOTAUGIDLIMIT parameter, this has two effects:

  1. Second-level (per-user and per-group) disk quotas are enabled for the given Container;
  2. The value that you assign to this parameter becomes the limit on the number of file owners and groups in this Container, including Linux system users. Note that you can still create extra users in this Container, but if the number of file owners inside the Container has already reached the limit, these users will not be able to own files.

Enabling per-user/group quotas for a Container requires restarting the Container. Choose the value carefully: the bigger the value you set, the bigger the kernel memory overhead the Container creates. The value must be greater than or equal to the number of entries in the Container's /etc/passwd and /etc/group files. Since a newly created Red Hat Linux-based CT has about 80 entries in total, a typical value is 100; for Containers with a large number of users it may be increased.
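
To estimate a suitable value, you can count the entries in these files inside the Container (a rough check; the output below is illustrative):

# vzctl exec 101 "cat /etc/passwd /etc/group | wc -l"
78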

The session below turns on second-level quotas for Container 101:

# vzctl set 101 --quotaugidlimit 100 --save
Unable to apply new quota values: ugid quota not initialized
Saved parameters for CT 101
# vzctl restart 101
Restarting container
Stopping container ...
Container was stopped
Container is unmounted
Starting container ...
Container is mounted
Adding IP address(es): 192.168.16.123
Setting CPU units: 1000
Configure meminfo: 65536
File resolv.conf was modified
Container start in progress...

Setting Up Second-Level Disk Quota Parameters

In order to work with disk quotas inside a Container, you should have standard quota tools installed:

# vzctl exec 101 rpm -q quota
quota-3.12-5

This command shows that the quota package is installed into the Container. Use the utilities from this package (as is prescribed in your Linux manual) to set OpenVZ second-level quotas for the given CT. For example:

# ssh ve101
root@ve101's password:
Last login: Sat Jul 5 00:37:07 2003 from 10.100.40.18
[root@ve101 root]# edquota root
Disk quotas for user root (uid 0):
  Filesystem   blocks      soft      hard     inodes     soft    hard
  /dev/simfs   38216       50000     60000    45454      70000   70000
[root@ve101 root]# repquota -a
*** Report for user quotas on device /dev/simfs
Block grace time: 00:00; Inode grace time: 00:00
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      —   38218   50000   60000          45453 70000 70000
[the rest of repquota output is skipped]

[root@ve101 root]# dd if=/dev/zero of=test
dd: writing to `test': Disk quota exceeded
23473+0 records in
23472+0 records out
[root@ve101 root]# repquota -a
*** Report for user quotas on device /dev/simfs
Block grace time: 00:00; Inode grace time: 00:00
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      +-   50001   50000   60000   none   45454 70000 70000
[the rest of repquota output is skipped]

In the above session, the root user has the disk space quota set to a hard limit of 60,000 1-KB blocks and a soft limit of 50,000 1-KB blocks; both the hard and soft limits for the number of inodes are set to 70,000.

It is also possible to set the grace period separately for block limits and inodes limits with the help of the /usr/sbin/setquota command. For more information on using the utilities from the quota package, please consult the system administration guide shipped with your Linux distribution or manual pages included in the package.
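
For instance, to set the block and inode grace periods for user quotas inside the Container to one hour and two hours, respectively, a command along the following lines can be used (the exact syntax may vary between quota package versions):

[root@ve101 root]# setquota -u -t 3600 7200 /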

Checking Quota Status

As the Hardware Node system administrator, you can check the quota status for any Container with the vzquota stat and vzquota show commands. The first command reports the status from the kernel and should be used for running Containers. The second command reports the status from the quota file (located at /var/vzquota/quota.CTID) and should be used for stopped Containers. Both commands have the same output format.

The session below shows a partial output of CT 101 quota statistics:

# vzquota stat 101 -t

   resource          usage       softlimit      hardlimit    grace
  1k-blocks          38281         1000000        1100000
     inodes          45703           90000          91000
User/group quota: on,active
Ugids: loaded 34, total 34, limit 100
Ugid limit was exceeded: no

User/group grace times and quotafile flags:
 type block_exp_time inode_exp_time  dqi_flags
 user                                       0h
group                                       0h

User/group objects:
ID    type   resource    usage   softlimit   hardlimit    grace status
0     user  1k-blocks    38220       50000       60000          loaded
0     user     inodes    45453       70000       70000          loaded
[the rest is skipped]

The first three lines of the output show the status of first-level disk quotas for the Container. The rest of the output displays statistics for user/group quotas and has separate lines for each user and group ID existing in the system.

If you do not need the second-level quota statistics, you can omit the -t switch from the vzquota command line.
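
For a stopped Container, the same information can be read from the quota file; the figures below are only illustrative:

# vzctl stop 101
Stopping container ...
Container was stopped
Container is unmounted
# vzquota show 101
   resource          usage       softlimit      hardlimit    grace
  1k-blocks          38281         1000000        1100000
     inodes          45703           90000          91000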

Configuring Container Disk I/O Priority Level

OpenVZ allows you to configure the disk I/O (input/output) priority level of a Container. The higher the Container I/O priority level, the more time the Container gets for its disk I/O activities as compared to the other Containers on the Hardware Node. By default, any Container on the Hardware Node has its I/O priority level set to 4. However, you can change the current Container I/O priority level in the range from 0 to 7 using the --ioprio option of the vzctl set command. For example, you can issue the following command to set the I/O priority of Container 101 to 6:

# vzctl set 101 --ioprio 6 --save
Saved parameters for Container 101

To check the I/O priority level currently applied to Container 101, you can execute the following command:

# grep IOPRIO /etc/vz/conf/101.conf
IOPRIO="6"

The command output shows that the current I/O priority level is set to 6.

Managing Container CPU Resources

This section explains the CPU resource parameters that you can configure and monitor for each Container.

The table below provides the name and the description for the CPU parameters. The File column indicates whether the parameter is defined in the OpenVZ global configuration file (G) or in the CT configuration files (V).

Parameter Description File
ve0cpuunits This is a positive integer number that determines the minimal guaranteed share of the CPU time Container 0 (the Hardware Node itself) will receive. It is recommended to set the value of this parameter to be 5-10% of the power of the Hardware Node. After the Node is up and running, you can redefine the amount of the CPU time allocated to the Node by using the --cpuunits parameter with the vzctl set 0 command. G
cpuunits This is a positive integer number that determines the minimal guaranteed share of the CPU time the corresponding Container will receive. Note: in the current version of OpenVZ, you can also use this parameter to define the CPU time share for the Hardware Node. V
cpulimit This is a positive number indicating the CPU time in per cent the corresponding CT is not allowed to exceed. V
cpus The number of CPUs to be used to handle the processes running inside the corresponding Container. V

Managing CPU Share

The OpenVZ CPU resource control utilities allow you to guarantee each Container a certain amount of CPU time. The Container can consume more than the guaranteed value if no other Containers are competing for the CPU and the cpulimit parameter is not set.

Note: The CPU time shares and limits are calculated over a one-second period. For example, if a Container is not allowed to receive more than 50% of the CPU time, it can receive no more than half a second of CPU time each second.

To get a view of the optimal share to be assigned to a Container, check the current Hardware Node CPU utilization:

# vzcpucheck
Current CPU utilization: 5166
Power of the node: 73072.5

The output of this command displays the total number of the so-called CPU units consumed by all running Containers and Hardware Node processes. This number is calculated by OpenVZ with the help of a special algorithm. The above example illustrates the situation when the Hardware Node is underused. In other words, the running Containers receive more CPU time than was guaranteed to them.
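
To see how the consumed units are distributed among the running Containers, you can use the verbose mode of the same utility, assuming your version of vzcpucheck supports the -v option (the per-Container breakdown below is only illustrative):

# vzcpucheck -v
veid            units
0                4166
101              1000
Current CPU utilization: 5166
Power of the node: 73072.5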

In the following example, Container 102 is guaranteed to receive about 2% of the CPU time even if the Hardware Node is fully used, or in other words, if the current CPU utilization equals the power of the Node. Besides, CT 102 will not receive more than 4% of the CPU time even if the CPU is not fully loaded:

# vzctl set 102 --cpuunits 1500 --cpulimit 4 --save
Saved parameters for CT 102
# vzctl start 102
Starting CT …
CT is mounted
Adding IP address(es): 192.168.1.102
CT start in progress…
# vzcpucheck
Current CPU utilization: 6667
Power of the node: 73072.5

Container 102 will receive from 2 to 4% of the Hardware Node CPU time unless the Hardware Node is overcommitted, i.e. the running Containers have been promised more CPU units than the power of the Hardware Node. In this case the CT might get less than 2 per cent.

Note: To set the --cpuunits parameter for the Hardware Node, specify 0 as the Container ID (e.g. vzctl set 0 --cpuunits 5000 --save).

Configuring Number of CPUs Inside Container

If your Hardware Node has more than one physical processor installed, you can control the number of CPUs which will be used to handle the processes running inside separate Containers. By default, a Container is allowed to consume the CPU time of all processors on the Hardware Node, i.e. any process inside any Container can be executed on any processor on the Node. However, you can modify the number of physical CPUs which will be simultaneously available to a Container using the --cpus option of the vzctl set command. For example, if your Hardware Node has 4 physical processors installed, i.e. any Container on the Node can make use of these 4 processors, you can set the processes inside Container 101 to be run on 2 CPUs only by issuing the following command:

# vzctl set 101 --cpus 2 --save

Note: The number of CPUs set for a Container must not exceed the number of physical CPUs installed on the Hardware Node. Here, 'physical CPUs' means the number of CPUs the OpenVZ kernel is aware of (you can view this number with the cat /proc/cpuinfo command on the Hardware Node).

You can check if the number of CPUs has been successfully changed by running the cat /proc/cpuinfo command inside your Container. Assuming that you have set two physical processors to handle the processes inside Container 101, your command output may look as follows:

# vzctl exec 101 cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 15
model : 4
model name : Intel(R) Xeon(TM) CPU 2.80GHz
stepping : 1
cpu MHz : 2793.581
cache size : 1024 KB
...
processor : 1
vendor_id : GenuineIntel
cpu family : 15
model : 4
model name : Intel(R) Xeon(TM) CPU 2.80GHz
stepping : 1
cpu MHz : 2793.581
cache size : 1024 KB
...

The output shows that Container 101 is currently bound to only two processors on the Hardware Node instead of the 4 available to the other Containers on this Node. From this point on, the processes of Container 101 will be executed on no more than 2 physical CPUs at a time, while the other Containers on the Node continue consuming the CPU time of all 4 Hardware Node processors, if needed. Note also that the specific physical CPUs serving Container 101 may not remain the same during the Container operation; they may change for load balancing reasons. The only thing that cannot change is their maximal number.

Managing System Parameters

The resources a Container may allocate are defined by the system resource control parameters. These parameters can be subdivided into the following categories: primary, secondary, and auxiliary. The primary parameters are the starting point for creating a Container configuration from scratch. The secondary parameters depend on the primary ones and are calculated from them according to a set of constraints. The auxiliary parameters help improve fault isolation among applications in the same Container and the way applications handle errors and consume resources. They also help enforce administrative policies on Containers by limiting the resources required by an application and, if necessary, preventing the application from running in the Container.

Listed below are all the system resource control parameters. The parameters whose names start with “num” are measured as integer counts. The parameters ending in “buf” or “size” are measured in bytes. The parameters containing “pages” in their names are measured in 4096-byte pages (IA32 architecture). The File column indicates that all the system parameters are defined in the corresponding CT configuration files (V).

Primary parameters
Parameter Description File
avnumproc The average number of processes and threads. V
numproc The maximal number of processes and threads the CT may create. V
numtcpsock The number of TCP sockets (PF_INET family, SOCK_STREAM type). This parameter limits the number of TCP connections and, thus, the number of clients the server application can handle in parallel. V
numothersock The number of sockets other than TCP ones. Local (UNIX-domain) sockets are used for communications inside the system. UDP sockets are used, for example, for Domain Name Service (DNS) queries. UDP and other sockets may also be used in some very specialized applications (SNMP agents and others). V
vmguarpages The memory allocation guarantee, in pages (one page is 4 Kb). CT applications are guaranteed to be able to allocate additional memory so long as the amount of memory accounted as privvmpages (see the auxiliary parameters) does not exceed the configured barrier of the vmguarpages parameter. Above the barrier, additional memory allocation is not guaranteed and may fail in case of overall memory shortage. V
Secondary parameters
Parameter Description File
kmemsize The size of unswappable kernel memory allocated for the internal kernel structures for the processes of a particular CT. V
tcpsndbuf The total size of send buffers for TCP sockets, i.e. the amount of kernel memory allocated for the data sent from an application to a TCP socket, but not acknowledged by the remote side yet. V
tcprcvbuf The total size of receive buffers for TCP sockets, i.e. the amount of kernel memory allocated for the data received from the remote side, but not read by the local application yet. V
othersockbuf The total size of UNIX-domain socket buffers, UDP, and other datagram protocol send buffers. V
dgramrcvbuf The total size of receive buffers of UDP and other datagram protocols. V
oomguarpages The out-of-memory guarantee, in pages (one page is 4 Kb). No CT process will be killed even in case of heavy memory shortage as long as the current memory consumption (including both physical memory and swap) does not reach the oomguarpages barrier. V
Auxiliary parameters
Parameter Description File
lockedpages The memory not allowed to be swapped out (locked with the mlock() system call), in pages. V
shmpages The total size of shared memory (including IPC, shared anonymous mappings and tmpfs objects) allocated by the processes of a particular CT, in pages. V
privvmpages The size of private (or potentially private) memory allocated by an application. The memory that is always shared among different applications is not included in this resource parameter. V
numfile The number of files opened by all CT processes. V
numflock The number of file locks created by all CT processes. V
numpty The number of pseudo-terminals, such as an ssh session, the screen or xterm applications, etc. V
numsiginfo The number of siginfo structures (essentially, this parameter limits the size of the signal delivery queue). V
dcachesize The total size of dentry and inode structures locked in the memory. V
physpages The total size of RAM used by the CT processes. This is an accounting-only parameter currently. It shows the usage of RAM by the CT. For the memory pages used by several different CTs (mappings of shared libraries, for example), only the corresponding fraction of a page is charged to each CT. The sum of the physpages usage for all CTs corresponds to the total number of pages used in the system by all the accounted users. V
numiptent The number of IP packet filtering entries. V

You can edit any of these parameters in the /etc/vz/conf/CTID.conf file of the corresponding Container by means of your favorite text editor (for example, vi or emacs), or by running the vzctl set command. For example:

# vzctl set 101 --kmemsize 2211840:2359296 --save
Saved parameters for CT 101
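
Remember the units when choosing such values. For example, since parameters with “pages” in their names are measured in 4096-byte pages, a privvmpages barrier of 65536 pages corresponds to 256 MB of allocated memory (65536 × 4096 bytes). The values below are purely illustrative:

# vzctl set 101 --privvmpages 65536:69632 --save
Saved parameters for CT 101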

Monitoring System Resource Consumption

It is possible to check the system resource control parameter statistics from within a Container. The primary use of these statistics is to understand which particular resource limit is preventing an application from starting. Moreover, these statistics report the current and maximal resource consumption of the running Container. This information can be obtained from the /proc/user_beancounters file.

The output below illustrates a typical session:

# vzctl exec 101 cat /proc/user_beancounters
Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
       101: kmemsize         803866    1246758    2457600    2621440          0
            lockedpages           0          0         32         32          0
            privvmpages        5611       7709      22528      24576          0
            shmpages             39        695       8192       8192          0
            dummy                 0          0          0          0          0
            numproc              16         27         65         65          0
            physpages          1011       3113          0 2147483647          0
            vmguarpages           0          0       6144 2147483647          0
            oomguarpages       2025       3113       6144 2147483647          0
            numtcpsock            3          4         80         80          0
            numflock              2          4        100        110          0
            numpty                0          1         16         16          0
            numsiginfo            0          2        256        256          0
            tcpsndbuf             0       6684     319488     524288          0
            tcprcvbuf             0       4456     319488     524288          0
            othersockbuf       2228       9688     132096     336896          0
            dgramrcvbuf           0       4276     132096     132096          0
            numothersock          4         17         80         80          0
            dcachesize        78952     108488     524288     548864          0
            numfile             194        306       1280       1280          0
            dummy                 0          0          0          0          0
            dummy                 0          0          0          0          0
            dummy                 0          0          0          0          0
            numiptent             0          0        128        128          0

The failcnt column displays the number of unsuccessful attempts to allocate a particular resource. If this value increases after an application fails to start, then the corresponding resource limit is in effect lower than is needed by the application.

The held column displays the current resource usage, and the maxheld column shows the maximal resource consumption over the last accounting period. The meaning of the barrier and limit columns depends on the parameter and is explained in the UBC guide.

Inside a CT, the /proc/user_beancounters file displays the information on the given CT only, whereas from the Hardware Node this file displays the information on all the CTs.
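
Since failcnt is the last field of each line, a quick way to spot the resources that have already hit their limits on the Hardware Node is to print only the lines with a non-zero failcnt (a simple awk one-liner; the first two header lines are skipped):

# awk 'NR > 2 && $NF > 0' /proc/user_beancounters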

Monitoring Memory Consumption

You can monitor a number of memory parameters for the whole Hardware Node and for particular Containers with the help of the vzmemcheck utility. For example:

# vzmemcheck -v
Output values in %
veid    LowMem  LowMem     RAM MemSwap MemSwap   Alloc   Alloc   Alloc
          util  commit    util    util  commit    util  commit   limit
101       0.19    1.93    1.23    0.34    1.38    0.42    1.38    4.94
1         0.27    8.69    1.94    0.49    7.19    1.59    2.05   56.54
----------------------------------------------------------------------
Summary:  0.46   10.62    3.17    0.83    8.57    2.02    3.43   61.48

The -v option is used to display the memory information for each Container rather than for the Hardware Node in general. It is also possible to show absolute values in megabytes by using the -A switch. The monitored parameters are (from left to right in the output above): low memory utilization, low memory commitment, RAM utilization, memory+swap utilization, memory+swap commitment, allocated memory utilization, allocated memory commitment, and allocated memory limit.

To understand these parameters, let us first draw the distinction between utilization and commitment levels.

  • Utilization level is the amount of resources consumed by CTs at the given time. In general, low utilization values mean that the system is under-utilized. Often, it means that the system is capable of supporting more Containers if the existing CTs continue to maintain the same load and resource consumption level. High utilization values (in general, more than 1, or 100%) mean that the system is overloaded and the service level of the Containers is degraded.
  • Commitment level shows how much resources are “promised” to the existing Containers. Low commitment levels mean that the system is capable of supporting more Containers. Commitment levels more than 1 mean that the Containers are promised more resources than the system has, and the system is said to be overcommitted. If the system runs a lot of CTs, it is usually acceptable to have some overcommitment because it is unlikely that all Containers will request resources at one and the same time. However, very high commitment levels will cause CTs to fail to allocate and use the resources promised to them and may hurt system stability.

Below is an overview of the resources checked by the vzmemcheck utility. Their complete description is provided in the UBC guide.

The low memory is the most important RAM area representing the part of memory residing at lower addresses and directly accessible by the kernel. In OpenVZ, the size of the “low” memory area is limited to 832 MB in the UP (uniprocessor) and SMP versions of the kernel, and to 3.6 GB in the Enterprise version of the kernel. If the total size of the computer RAM is less than the limit (832 MB or 3.6 GB, respectively), then the actual size of the “low” memory area is equal to the total memory size.

The union of RAM and swap space is the main computer resource determining the amount of memory available to applications. If the total size of memory used by applications exceeds the RAM size, the Linux kernel moves some data to swap and loads it back when the application needs it. More frequently used data tends to stay in RAM; less frequently used data spends more time in swap. Swap-in and swap-out activity reduces system performance to some extent, but if this activity is not excessive, the performance decrease is not very noticeable. On the other hand, the benefits of using swap space are considerable: it makes it possible to roughly double the number of Containers in the system. Swap space is essential for handling system load bursts. A system with enough swap space just slows down at high load bursts, whereas a system without swap space reacts to high load bursts by refusing memory allocations (causing applications to refuse to accept clients or terminate) and directly killing some applications. Additionally, the presence of swap space helps the system better balance memory and move data between the low memory area and the rest of the RAM.

Allocated memory is a more “virtual” system resource than RAM or RAM plus swap space. Applications may allocate memory but start using it only later, and only then does the amount of free physical memory really decrease. The sum of the sizes of memory allocated in all Containers is only an estimate of how much physical memory will be used if all applications claim the allocated memory. The memory available for allocation can be not only used (the Alloc util column) or promised (the Alloc commit column), but also limited (applications will not be able to allocate more resources than is indicated in the Alloc limit column).

Managing CT Resources Configuration

Any CT is configured by means of its own configuration file. You can manage your CT configurations in a number of ways:

  1. Using configuration sample files shipped with OpenVZ. These files are used when a new Container is being created (for details, see the #Creating and Configuring New Container section). They are stored in /etc/vz/conf/ and have the ve-name.conf-sample mask. Currently, the following configuration sample files are provided:
    • light – to be used for creating “light” CTs having restrictions on the upper limit of quality of service parameters;
    • basic – to be used for common CTs.
    Note: Configuration sample files cannot contain spaces in their names.

    Any sample configuration file may also be applied to a Container after it has been created. You would do this if, for example, you want to upgrade or downgrade the overall resources configuration of a particular CT:

    # vzctl set 101 --applyconfig light --save
    
    This command applies all the parameters from the ve-light.conf-sample file to the given CT, except for the OSTEMPLATE, VE_ROOT, and VE_PRIVATE parameters, should they exist in the sample configuration file.
  2. Using OpenVZ specialized utilities for preparing configuration files in their entirety. The tasks these utilities perform are described in the following subsections of this section.
  3. Creating and editing the corresponding configuration file (/etc/vz/conf/CTID.conf) directly. This can be done with any text editor. The instructions on how to edit CT configuration files directly are provided in the four preceding sections. In this case you have to edit all the configuration parameters separately, one by one.

Splitting Hardware Node Into Equal Pieces

It is possible to create a Container configuration roughly representing a given fraction of the Hardware Node. For example, to create a configuration that allows up to 20 fully loaded Containers to run simultaneously on the given Hardware Node, proceed as illustrated below:

# cd /etc/vz/conf
# vzsplit -n 20 -f vps.mytest
Config /etc/vz/conf/ve-vps.mytest.conf-sample was created
# vzcfgvalidate /etc/vz/conf/ve-vps.mytest.conf-sample
Recommendation: kmemsize.lim-kmemsize.bar should be > 253952 (currently, 126391)
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 93622)

Note that the configuration produced depends on the resources of the given Hardware Node. Therefore, it is important to validate the resulting configuration file before trying to use it, which is done with the help of the vzcfgvalidate utility.

The number of Containers you can run on the Hardware Node is actually several times greater than the value specified in the command line because Containers normally do not consume all the resources that are guaranteed to them. To illustrate this idea, let us look at the Container created from the configuration produced above:

# vzctl create 101 --ostemplate centos-5 --config vps.mytest
Creating CT private area: /vz/private/101
CT private area was created
# vzctl set 101 --ipadd 192.168.1.101 --save
Saved parameters for CT 101
# vzctl start 101
Starting CT ...
CT is mounted
Adding IP address(es): 192.168.1.101
CT start in progress...
# vzcalc 101
Resource     Current(%)  Promised(%)  Max(%)
Memory           0.53       1.90       6.44

As you can see, if Containers use all the resources guaranteed to them, around 20 CTs can run simultaneously. However, judging by the Promised column, it is safe to run 40–50 such Containers on this Hardware Node.

Validating Container Configuration

The system resource control parameters have complex interdependencies. Violation of these interdependencies can be catastrophic for the Container. In order to ensure that a Container does not break them, it is important to validate the CT configuration file before creating CTs on its basis.

Here is how to validate a CT configuration:

# vzcfgvalidate /etc/vz/conf/101.conf
Error: kmemsize.bar should be > 1835008 (currently, 25000)
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 65536)
Recommendation: othersockbuf.bar should be > 132096 (currently, 122880)

The utility checks constraints on the resource management parameters and displays all the constraint violations found. There can be three levels of violation severity:

Recommendation This is a suggestion, which is not critical for Container or Hardware Node operations. The configuration is valid in general; however, if the system has enough memory, it is better to increase the settings as advised.
Warning A constraint is not satisfied, and the configuration is invalid. The Container applications may not have optimal performance or may fail in an ungraceful way.
Error An important constraint is not satisfied, and the configuration is invalid. The Container applications have increased chances to fail unexpectedly, to be terminated, or to hang.

Manual adjustment

To fix errors or warnings reported by vzcfgvalidate, adjust the parameters accordingly and re-run vzcfgvalidate:

# vzctl set 101 --kmemsize 2211840:2359296 --save
Saved parameters for CT 101
# vzcfgvalidate /etc/vz/conf/101.conf
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 65536)
Recommendation: othersockbuf.bar should be > 132096 (currently, 122880)
Validation completed: success

In the scenario above, the first run of the vzcfgvalidate utility found a critical error for the kmemsize parameter value. After setting reasonable values for kmemsize, the resulting configuration produced only recommendations, and the Container can be safely run with this configuration.

Automatic adjustment

FIXME: vzcfgvalidate -r|-i
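
A minimal sketch of the repair modes referenced above (behavior may differ between vzcfgvalidate versions): the -r switch adjusts invalid parameter values automatically, while -i does the same interactively, asking for confirmation before each change:

# vzcfgvalidate -r /etc/vz/conf/101.conf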

Applying New Configuration Sample to Container

The OpenVZ software enables you to change the configuration sample file a Container is based on and thus to modify at once all the resources the Container may consume and/or allocate. For example, if Container 101 is currently based on the light configuration sample and you are planning to run a more heavyweight application inside the Container, you may wish to apply the basic sample to it instead, which will automatically adjust the necessary Container resource parameters. To do this, execute the following command on the Node:

# vzctl set 101 --applyconfig basic --save
Saved parameters for CT 101

This command reads the resource parameters from the ve-basic.conf-sample file located in the /etc/vz/conf directory and applies them one by one to Container 101.

When applying new configuration samples to Containers, please keep in mind the following:

  • All Container sample files are located in the /etc/vz/conf directory on the Hardware Node and are named according to the following pattern: ve-name.conf-sample. You should specify only the name part of the corresponding sample name after the --applyconfig option (basic in the example above).
  • The --applyconfig option applies all the parameters from the specified sample file to the given Container, except for the OSTEMPLATE, VE_ROOT, VE_PRIVATE, HOSTNAME, IP_ADDRESS, TEMPLATE, NETIF parameters (if they exist in the sample file).
  • You may need to restart your Container, depending on whether the changed parameters can be applied on the fly. If some parameters cannot be applied on the fly, you will see a message informing you of this.