
UBC systemwide configuration


The UBC consistency check article discussed validation of the UBC configuration of a single container. This article discusses how to check that the UBC configuration of the whole system is valid.

Configurations where the resources allowed to containers exceed the system capacity[1] are invalid and dangerous from the stability point of view. They may result in abnormal termination of applications, bad responsiveness of the system and, sometimes, system hangs. Whereas the validation discussed in UBC consistency check addressed application functionality, the validation considered in this section is aimed at the security and stability of the whole system.

The best way to make sure that the configuration of the whole system is valid is to run periodic automatic checks based on the formulae described below. The vzmemcheck(8) utility can be helpful in the calculations.
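As a starting point for such automatic checks, the per-container values can be read from /proc/user_beancounters. The following is a minimal sketch of a parser, assuming the usual column layout (uid, resource, held, maxheld, barrier, limit, failcnt); the sample text and container ID below are hypothetical, and the layout should be verified against your kernel version.

```python
def parse_beancounters(text):
    """Return {ctid: {resource: (held, barrier, limit)}}."""
    containers = {}
    ctid = None
    for line in text.splitlines():
        fields = line.split()
        # Skip the version line, the column header, and blank lines.
        if not fields or fields[0] in ("Version:", "uid"):
            continue
        if fields[0].endswith(":"):  # a new container section starts
            ctid = int(fields[0][:-1])
            containers[ctid] = {}
            fields = fields[1:]
        resource = fields[0]
        held, barrier, limit = int(fields[1]), int(fields[3]), int(fields[4])
        containers[ctid][resource] = (held, barrier, limit)
    return containers

# Hypothetical sample in the assumed format; on a real node, read
# the contents of /proc/user_beancounters instead.
sample = """Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
       101: kmemsize        1989525    1989525    2752512    2936012          0
            numproc              45         53        130        130          0
"""
```

The parsed dictionary can then feed the utilization and commitment formulae below.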

 Resource utilization and commitment level

Several resources of the whole system (such as RAM) are discussed below in terms of utilization and commitment level.

Utilization shows the amount of resources consumed by all containers at the given time. In general, low utilization values mean that the system is under-utilized. Often, it means that the system is capable of supporting more containers if the existing containers continue to maintain the same load and resource consumption level. High utilization values (in general, more than 1) mean that the system is overloaded and the service level of the containers is degraded.

Commitment level shows how many resources are “promised” to the existing containers. Low commitment levels mean that the system is capable of supporting more containers. Commitment levels more than 1 mean that the containers are promised more resources than the system has, and in this case the system is said to be overcommitted. If the system runs a lot of containers, it is usually acceptable to have some overcommitment, because it is unlikely that all containers will request resources at the same time. However, higher commitment levels (as discussed below for each resource individually) will cause containers to experience failures to allocate and use the resources promised to them.

 “Low memory” (x86_32 specific)

Because of the specifics of the architecture of Intel x86 processors, the RAM of the computer can't be used uniformly. The most important memory area is the so-called “low memory”: the part of memory residing at lower addresses and directly accessible by the kernel. For current Linux kernels, the size of the low memory area is 832MB (or the whole RAM, if the computer has less than 832MB).

Note that 64-bit kernels (x86_64, ia64, etc.) can access all memory directly and do not use a separate “low memory” area.

 Utilization

The lower bound estimation of low memory utilization is

$\frac{\displaystyle\sum_{all\ containers} (kmemsize_{cur}+allsocketbuf_{cur})} {0.4\cdot\min(RAM\ size, {\rm 832MB})}\rm,$

where

$allsocketbuf=tcprcvbuf+tcpsndbuf+dgramrcvbuf+othersockbuf\rm.$

Utilization of low memory below $1$ is normal. Utilization above $1$ is not safe, and utilization above $2$ is dangerous and very likely to cause bad system responsiveness, application stalls for seconds or more, and termination of some applications.
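The formula above can be sketched as follows. The input is a list of per-container dicts of current (“held”) values in bytes; the field names and the sample numbers in the test are hypothetical, and on a real node would be taken from /proc/user_beancounters.

```python
LOW_MEM_BYTES = 832 * 1024 * 1024  # directly accessible low memory, x86_32

def allsocketbuf(ct):
    """allsocketbuf = tcprcvbuf + tcpsndbuf + dgramrcvbuf + othersockbuf."""
    return (ct["tcprcvbuf"] + ct["tcpsndbuf"]
            + ct["dgramrcvbuf"] + ct["othersockbuf"])

def low_memory_utilization(containers, ram_size):
    """Lower-bound estimate: sum of (kmemsize + allsocketbuf) over all
    containers, divided by 0.4 * min(RAM size, 832MB)."""
    total = sum(ct["kmemsize"] + allsocketbuf(ct) for ct in containers)
    return total / (0.4 * min(ram_size, LOW_MEM_BYTES))
```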

 Commitment level

The commitment level can be computed as

$\frac{\displaystyle\sum_{all\ containers} (kmemsize_{lim}+allsocketbuf_{lim})} {0.4\cdot\min(RAM\ size, {\rm 832MB})}\rm.$

Commitment levels below $1$ are normal. Levels between $1$ and $1.2$ are usually acceptable for systems with about 100 containers. Systems with more containers may have a higher commitment level, up to about $1.5$–$2$ for 400 containers. Higher commitment levels for this resource are not recommended, because the consequences of exceeding the low memory capacity are severe and affect the whole system and all containers.
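A sketch of the commitment-level computation, assuming the per-container limits have already been collected into dicts; the field names (kmemsize_lim, allsocketbuf_lim, both in bytes) are hypothetical placeholders for the values parsed from /proc/user_beancounters.

```python
LOW_MEM_BYTES = 832 * 1024 * 1024  # directly accessible low memory, x86_32

def low_memory_commitment(containers, ram_size):
    """Same denominator as the utilization check, but sums the *limits*:
    kmemsize_lim plus the combined socket-buffer limits."""
    promised = sum(ct["kmemsize_lim"] + ct["allsocketbuf_lim"]
                   for ct in containers)
    return promised / (0.4 * min(ram_size, LOW_MEM_BYTES))
```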

 Total RAM

This subsection discusses usage of the whole RAM and its utilization. Usage of swap space and the sum of used RAM and swap space are discussed below in subsection Memory and swap space.

The current version of OpenVZ can't guarantee the availability of a certain amount of RAM (as opposed to the sum of memory and swap space), so the commitment level is not applicable to total RAM. Such guarantees will be implemented in future versions.

 Utilization

The amount of RAM consumed by all containers can be computed as

$\sum_{all\ containers} (physpages_{cur}\cdot4096+kmemsize_{cur}+allsocketbuf_{cur})\rm.$

The difference between the memory usage shown by free(1) or /proc/meminfo and the total amount of RAM consumed by containers is the memory used by system daemons and various caches.

The memory utilization can be computed as

$\frac{\displaystyle\sum_{all\ containers} (physpages_{cur}\cdot4096+ kmemsize_{cur}+allsocketbuf_{cur})} {RAM\ size}\rm.$

Utilization levels from $0.8$ to $1$ are normal. Lower utilization means that the system is under-utilized and, if other system resources and their commitment levels permit, can host more containers. By the nature of the accounting of physpages and other parameters, total RAM utilization can't exceed $1$.
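The RAM utilization formula can be sketched as below. Here physpages is in 4 KB pages while kmemsize and the combined socket buffers are in bytes; the field names are hypothetical stand-ins for the current (“held”) values from /proc/user_beancounters.

```python
def ram_utilization(containers, ram_size):
    """Total RAM consumed by containers divided by the RAM size."""
    used = sum(ct["physpages"] * 4096 + ct["kmemsize"] + ct["allsocketbuf"]
               for ct in containers)
    return used / ram_size
```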

 Memory and swap space

The main resource of the computer determining the amount of memory applications can use is the sum of RAM and swap sizes. If the total size of the used memory exceeds the RAM size, the Linux kernel moves some data to swap and loads it back when the application needs it. More frequently used data tends to stay in RAM; less frequently used data spends more time in swap.

Swap-in and swap-out activity reduces system performance to some extent. However, if this activity is not excessive, the performance decrease is not very noticeable. On the other hand, the benefits of using swap space are quite big, making it possible to increase the number of containers in the system about 2 times.

Swap space is essential for handling system load bursts. A system with enough swap just slows down under high load bursts, whereas a system without swap reacts to them by refusing memory allocations (causing applications to refuse to accept clients, or to terminate) and by outright killing of some applications. Additionally, the presence of swap space helps the system balance memory better and move data between “low memory” and the rest of the RAM.

In all OpenVZ installations it is strongly recommended to have swap space not smaller than the RAM size.

Also, it is not recommended to create swap space larger than 4 times the RAM size, because of performance degradation related to swap-in and swap-out activity. That is, the system should be configured so that $RAM\ size \le swap\ size \le 4\cdot RAM\ size\rm.$ The optimal configuration is a swap size of twice the RAM size.
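These sizing rules are simple enough to check mechanically; a small sketch:

```python
def swap_size_ok(ram_size, swap_size):
    """Recommended bound from the text: RAM <= swap <= 4 * RAM."""
    return ram_size <= swap_size <= 4 * ram_size

def optimal_swap(ram_size):
    """The optimal configuration: swap twice the RAM size."""
    return 2 * ram_size
```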

 Utilization

$\frac{\displaystyle\sum_{all\ containers} (oomguarpages_{cur}\cdot4096+ kmemsize_{cur}+allsocketbuf_{cur})} {RAM\ size + swap\ size}\rm.$

The normal utilization of memory plus swap ranges between $\frac{RAM\ size}{RAM\ size + swap\ size}$ and $\frac{RAM\ size + \frac12swap\ size}{RAM\ size + swap\ size}.$

Lower utilization means that the system memory is under-utilized at the moment of the check. Higher utilization is likely to cause gradual performance degradation because of swap-in and swap-out activity, and is a sign of system overload.
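A sketch of this utilization level and its normal range. As before, oomguarpages is in 4 KB pages and the other values in bytes; the field names are hypothetical placeholders for the current (“held”) values.

```python
def mem_swap_utilization(containers, ram_size, swap_size):
    """Memory-plus-swap consumed by containers over total capacity."""
    used = sum(ct["oomguarpages"] * 4096 + ct["kmemsize"]
               + ct["allsocketbuf"] for ct in containers)
    return used / (ram_size + swap_size)

def mem_swap_utilization_is_normal(util, ram_size, swap_size):
    """Normal range from the text: between RAM/(RAM+swap)
    and (RAM + swap/2)/(RAM+swap)."""
    lo = ram_size / (ram_size + swap_size)
    hi = (ram_size + swap_size / 2) / (ram_size + swap_size)
    return lo <= util <= hi
```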

 Commitment level

$\frac{\displaystyle\sum_{all\ containers} (oomguarpages_{bar}\cdot4096+ kmemsize_{lim}+allsocketbuf_{lim})} {RAM\ size + swap\ size}\rm.$

The normal commitment level is about $0.8$–$1$.

A commitment level of more than $1$ means that the containers are guaranteed more memory than the system has. Such overcommitment is strongly discouraged, because in that case, if all the memory is consumed, random applications, including ones belonging to the host system, may be killed, and the system may become inaccessible via ssh(1) and lose other important functionality.

It is better to guarantee containers less, and keep commitment levels lower, than to accidentally overcommit the system on memory plus swap. If the system has spare memory and swap, containers will transparently be able to use memory and swap above their guarantees. Guarantees given to containers need not be big, and it is normal if memory and swap usage for some containers stays above their guarantee. It is also normal to give guarantees only to containers with preferred service. But administrators should not guarantee a container more than the system actually has.
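The commitment-level formula above can be sketched as follows, summing the oomguarpages *barriers* (in 4 KB pages) with the kmemsize and combined socket-buffer *limits* (in bytes); the field names are hypothetical.

```python
def mem_swap_commitment(containers, ram_size, swap_size):
    """Memory-plus-swap promised to containers over total capacity;
    keep this below 1 to avoid overcommitment."""
    promised = sum(ct["oomguarpages_bar"] * 4096 + ct["kmemsize_lim"]
                   + ct["allsocketbuf_lim"] for ct in containers)
    return promised / (ram_size + swap_size)
```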

 Allocated memory

This subsection considers standard memory allocations made by applications in the container. The allocations for each container are controlled by two parameters: vmguarpages and privvmpages, discussed in sections FIXME.

Allocated memory is a more “virtual” system resource than the RAM or RAM plus swap size. Applications can allocate memory but start to use it only later, and the amount of the system's free memory decreases only at the moment of use. The sum of the allocated memory sizes of all containers is an estimate of how much physical memory will be used when (and if) all applications claim the memory they allocated.

 Utilization

$\frac{\displaystyle\sum_{all\ containers} (privvmpages_{cur}\cdot4096+ kmemsize_{cur}+allsocketbuf_{cur})} {RAM\ size + swap\ size}\rm.$

This utilization level is the ratio of the amount of allocated memory to the capacity of the system.

A low utilization level means that the system can support more containers, if other resources permit. A high utilization level may, but does not necessarily, mean that the system is overloaded. As explained above, not all applications use all of their allocated memory, so this utilization level may exceed $1$.

Computing this utilization level is useful for comparing it with the commitment level and with the level of memory allocation restrictions discussed below, in order to configure memory allocation restrictions for containers.

 Commitment level

Allocation guarantee commitment level

$\frac{\displaystyle\sum_{all\ containers} (vmguarpages_{bar}\cdot4096+ kmemsize_{lim}+allsocketbuf_{lim})} {RAM\ size + swap\ size}$

is the ratio of the memory space guaranteed to be available for allocations to the capacity of the system. Similarly to the commitment level of memory plus swap space (discussed in subsection Memory and swap space), this level should be kept below $1$. If the level is above $1$, the chances increase significantly that, in case of a memory shortage, applications will be killed instead of being notified.

It is better to provide lower guarantees than to accidentally guarantee more than the system has, because containers are allowed to allocate memory above their guarantee if the system is not tight on memory. It is also normal to give guarantees only to containers with preferred service.

 Limiting memory allocations

In addition to providing allocation guarantees, it is possible to impose restrictions on the amount of memory allocated by containers.

If a system has multiple containers, it is important to make sure that for each container

$privvmpages_{lim}\cdot4096 \le 0.6\cdot {RAM\ size}\rm.$

If this condition is not satisfied, a single container may easily cause excessive swap-out and very bad performance of the whole system. Usually, the privvmpages limits for each container are set to values much less than the size of the RAM.
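The per-container condition is a one-liner to check; privvmpages_lim is in 4 KB pages and ram_size in bytes.

```python
def privvmpages_limit_ok(privvmpages_lim, ram_size):
    """Per-container condition: privvmpages_lim * 4096 <= 0.6 * RAM."""
    return privvmpages_lim * 4096 <= 0.6 * ram_size
```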

The resource control parameters should be configured so that, in case of a memory shortage, applications are given a chance to notice the shortage and exit gracefully instead of being terminated by the kernel. For this purpose, it is recommended to maintain a reasonable total level of memory allocation restrictions, computed as

$\frac{\displaystyle\sum_{all\ containers} (privvmpages_{lim}\cdot4096+ kmemsize_{lim}+allsocketbuf_{lim})} {RAM\ size + swap\ size}\rm.$

This number shows how much memory applications are allowed to allocate in comparison with the capacity of the system.

In practice, a lot of applications do not use memory very efficiently, and sometimes allocated memory is never used at all. For example, the Apache web server at start time allocates about 20–30% more memory than it will ever use. Some multi-threaded applications are especially bad at using their memory, and their ratio of allocated to used memory may reach 1000%.

The bigger the level of memory allocation restrictions, the higher the chances that, in case of a memory shortage, applications will be killed instead of getting an error on their next memory allocation. Levels in the range of $1.5$–$4$ can be considered acceptable. Administrators can experimentally find the optimal setting for their load, based on the frequency of “Out of Memory: killing process” messages in the system logs saved by klogd(8) and syslogd(8). However, for stability-critical applications, it is better to keep the level below $1$.
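The total restriction level can be sketched the same way as the other sums; privvmpages_lim is in 4 KB pages, the other limits in bytes (hypothetical field names).

```python
def allocation_restriction_level(containers, ram_size, swap_size):
    """Total memory applications are allowed to allocate, relative to
    the capacity of the system (RAM plus swap)."""
    total = sum(ct["privvmpages_lim"] * 4096 + ct["kmemsize_lim"]
                + ct["allsocketbuf_lim"] for ct in containers)
    return total / (ram_size + swap_size)
```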

1. More precisely, configurations with excessive overcommitment, as explained below.