User Guide/Managing Resources

38,328 bytes added, 12:21, 14 January 2009
The main goal of resource control in OpenVZ is to provide Service Level Management or Quality of Service (QoS) for Containers. Correctly configured resource control settings prevent one Container's resource over-usage (accidental or malicious) from seriously impacting the other Containers. Resource control parameters also allow you to enforce fair resource usage among Containers and, if necessary, to provide better service quality for preferred CTs.

== What are Resource Control Parameters? ==
The system administrator controls the resources available to a Container through a set of resource management parameters. All these parameters are defined either in the OpenVZ global configuration file (<code>/etc/vz/vz.conf</code>), or in the respective CT configuration files (<code>/etc/vz/conf/''CTID''.conf</code>), or in both. You can set them by manually editing the corresponding configuration files, or by using the OpenVZ command-line utilities. These parameters can be divided into the disk, network, CPU, and system categories. The table below summarizes these groups:

{| class="wikitable"
! Group !! Description !! Parameter names !! Explained in
|-
| Disk
| This group of parameters determines disk quota in OpenVZ. The OpenVZ disk quota is realized on two levels: the per-CT level and the per-user/group level. You can turn on/off disk quota on any level and configure its settings.
| DISK_QUOTA, DISKSPACE, DISKINODES, QUOTATIME, QUOTAUGIDLIMIT
| [[#Managing Disk Quotas]]
|-
| CPU
| This group of parameters defines the CPU time different CTs are guaranteed to receive.
| VE0CPUUNITS, CPUUNITS
| [[#Managing CPU Share]]
|-
| System
| This group of parameters defines various aspects of the use of system memory, TCP sockets, IP packets, and similar resources by different CTs.
| avnumproc, numproc, numtcpsock, numothersock, vmguarpages, kmemsize, tcpsndbuf, tcprcvbuf, othersockbuf, dgramrcvbuf, oomguarpages, lockedpages, shmpages, privvmpages, physpages, numfile, numflock, numpty, numsiginfo, dcachesize, numiptent
| [[#Managing System Parameters]]
|}

== Managing Disk Quotas ==
This section explains what disk quotas are, defines disk quota parameters, and describes how to perform disk quota related operations:
* Turning on and off per-CT (first-level) disk quotas;
* Setting up first-level disk quota parameters for a Container;
* Turning on and off per-user and per-group (second-level) disk quotas inside a Container;
* Setting up second-level quotas for a user or for a group;
* Checking disk quota statistics;
* Cleaning up Containers in certain cases.

=== What are Disk Quotas? ===
Disk quotas enable system administrators to control the size of Linux file systems by limiting the amount of disk space and the number of inodes a Container can use. These quotas are known as per-CT quotas or first-level quotas in OpenVZ. In addition, OpenVZ enables the Virtual Private Server administrator to limit disk space and the number of inodes that individual users and groups in that CT can use. These quotas are called per-user and per-group quotas or second-level quotas in OpenVZ.

By default, OpenVZ has first-level quotas enabled (which is defined in the OpenVZ global configuration file), whereas second-level quotas must be turned on for each Container separately (in the corresponding CT configuration files). It is impossible to turn on second-level disk quotas for a Container if first-level disk quotas are off for that Container.

The disk quota block size in OpenVZ is always 1024 bytes. It may differ from the block size of the underlying file system.

OpenVZ keeps quota usage statistics and limits in a special quota file, <code>/var/vzquota/quota.''ctid''</code>. The quota file has a special flag indicating whether the file is “dirty”, i.e. whether its contents have become inconsistent with the real CT usage. When the disk space or inode usage changes during CT operation, these statistics are not automatically synchronized with the quota file; the file just gets the “dirty” flag. They are synchronized only when the CT is stopped or when the HN is shut down, and after synchronization the “dirty” flag is removed. If the Hardware Node has been brought down incorrectly (for example, the power switch was hit), the file remains “dirty”, and the quota is re-initialized on the next CT startup. This operation may noticeably increase the Node startup time, so it is highly recommended to shut down the Hardware Node properly.

=== Disk Quota Parameters ===
The table below summarizes the disk quota parameters that you can control. The File column indicates whether the parameter is defined in the OpenVZ global configuration file (G), in the CT configuration files (V), or it is defined in the global configuration file but can be overridden in a separate CT configuration file (GV).
{| class="wikitable"
! Parameter !! Description !! File
|-
| disk_quota
| Indicates whether first-level quotas are on or off for all CTs or for a separate CT.
| GV
|-
| diskspace
| Total size of disk space the CT may consume, in 1-Kb blocks.
| V
|-
| diskinodes
| Total number of disk inodes (files, directories, and symbolic links) the Container can allocate.
| V
|-
| quotatime
| The grace period for the disk quota overusage defined in seconds. The Container is allowed to temporarily exceed its quota soft limits for no more than the QUOTATIME period.
| V
|-
| quotaugidlimit
| Number of user/group IDs allowed for the CT internal disk quota. If set to 0, the UID/GID quota will not be enabled.
| V
|}

=== Turning On and Off Per-CT Disk Quotas ===
The parameter that defines whether to use first-level disk quotas is <code>DISK_QUOTA</code> in the OpenVZ global configuration file (<code>/etc/vz/vz.conf</code>). By setting it to “no”, you will disable OpenVZ quotas completely.

This parameter can be specified in the Container configuration file (<code>/etc/vz/conf/''ctid''.conf</code>) as well. In this case its value takes precedence over the one specified in the global configuration file. If you intend to have a mixture of Containers with quotas turned on and off, it is recommended to set the <code>DISK_QUOTA</code> value to “yes” in the global configuration file and to “no” in the configuration files of those CTs that do not need quotas.

The session below illustrates a scenario when first-level quotas are on by default and are turned off for Container 101:
<pre>
''[checking that quota is on]''
# grep DISK_QUOTA /etc/vz/vz.conf
DISK_QUOTA=yes

''[checking available space on /vz partition]''
# df /vz
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/sda2 8957295 1421982 7023242 17% /vz

''[editing CT configuration file to add DISK_QUOTA=no]''
# vi /etc/vz/conf/101.conf

''[checking that quota is off for CT 101]''
# grep DISK_QUOTA /etc/vz/conf/101.conf
DISK_QUOTA=no

# vzctl start 101
Starting CT …
CT is mounted
Adding IP address(es): 192.168.1.101
Hostname for CT set: vps101.my.org
CT start in progress…
# vzctl exec 101 df
Filesystem 1k-blocks Used Available Use% Mounted on
simfs 8282373 747060 7023242 10% /
</pre>

As the above example shows, the only limits on disk space and inodes for a Container with quotas turned off are the space and inodes available on the partition where the CT private area resides.

{{Note|You must change the DISK_QUOTA parameter in the global OpenVZ configuration file only when all Containers are stopped, and in a CT configuration file only when the corresponding CT is stopped. Otherwise, the configuration may prove inconsistent with the real quota usage, and this can interfere with the normal Hardware Node operation.}}
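The safe sequence described in the note can be sketched as follows. The CT ID is illustrative, and a temporary file stands in for <code>/etc/vz/conf/101.conf</code> so the sketch is self-contained; on a real node you would edit the actual configuration file between <code>vzctl stop</code> and <code>vzctl start</code>:

```shell
# Sketch: disabling first-level quotas for CT 101 with the CT stopped.
CONF=$(mktemp)                # stand-in for /etc/vz/conf/101.conf
echo 'DISK_QUOTA=yes' > "$CONF"

# vzctl stop 101              # stop the CT before touching its quota settings
sed -i 's/^DISK_QUOTA=.*/DISK_QUOTA=no/' "$CONF"
# vzctl start 101             # restart so the new setting takes effect

grep DISK_QUOTA "$CONF"       # prints: DISK_QUOTA=no
rm -f "$CONF"
```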

=== Setting Up Per-CT Disk Quota Parameters ===
Three parameters determine how much disk space and inodes a Container can use. These parameters are specified in the Container configuration file:
{| class="wikitable"
| DISKSPACE
| Total size of disk space that can be consumed by the Container in 1-Kb blocks. When the space used by the Container hits the soft limit, the CT can allocate additional disk space up to the hard limit during the grace period specified by the QUOTATIME parameter.
|-
| DISKINODES
| Total number of disk inodes (files, directories, and symbolic links) the Container can allocate. When the number of inodes used by the Container hits the soft limit, the CT can create additional file entries up to the hard limit during the grace period specified by the QUOTATIME parameter.
|-
| QUOTATIME
| The grace period of the disk quota specified in seconds. The Container is allowed to temporarily exceed the soft limit values for the disk space and disk inodes quotas for no more than the period specified by this parameter.
|}

The first two parameters have both soft and hard limits (or, simply, barriers and limits). The hard limit is the limit that cannot be exceeded under any circumstances. The soft limit can be exceeded up to the hard limit, but as soon as the grace period expires, the additional disk space or inodes allocations will fail. Barriers and limits are separated by colons (“:”) in Container configuration files and in the command line.
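Since the quota block size is fixed at 1024 bytes, a <code>barrier:limit</code> pair can be derived from a target size in gigabytes; the 10% headroom between soft and hard limit below is an arbitrary choice for illustration:

```shell
# Build a "barrier:limit" pair for --diskspace from a size in GB.
# Quota blocks are always 1024 bytes, so 1 GB = 1048576 blocks.
gb=1
barrier=$((gb * 1048576))
limit=$((barrier + barrier / 10))   # hard limit = soft limit + 10% headroom
echo "--diskspace ${barrier}:${limit}"
# With gb=1 this prints: --diskspace 1048576:1153433
```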

The following session sets the disk space available to Container 101 to approximately 1Gb and allows the CT to allocate up to 90,000 inodes. The grace period for the quotas is set to ten minutes:

<pre>
# vzctl set 101 --diskspace 1000000:1100000 --save
Saved parameters for CT 101
# vzctl set 101 --diskinodes 90000:91000 --save
Saved parameters for CT 101
# vzctl set 101 --quotatime 600 --save
Saved parameters for CT 101
# vzctl exec 101 df
Filesystem 1k-blocks Used Available Use% Mounted on
simfs 1000000 747066 252934 75% /
# vzctl exec 101 stat -f /
File: "/"
ID: 0 Namelen: 255 Type: ext2/ext3
Blocks: Total: 1000000 Free: 252934 Available: 252934 Size: 1024
Inodes: Total: 90000 Free: 9594
</pre>

It is possible to change the first-level disk quota parameters for a running Container; the changes take effect immediately. If you do not want your changes to survive the next Container restart, omit the <code>--save</code> switch.

=== Turning On and Off Second-Level Quotas for a Container ===
The parameter that controls the second-level disk quotas is <code>QUOTAUGIDLIMIT</code> in the CT configuration file. By default, the value of this parameter is zero, which means per-user/group quotas are disabled.

Assigning a non-zero value to the <code>QUOTAUGIDLIMIT</code> parameter has the two following results:
# Second-level (per-user and per-group) disk quotas are enabled for the given Container;
# The value that you assign to this parameter will be the limit for the number of file owners and groups of this CT, including Linux system users. Note that you will theoretically be able to create extra users in this CT, but if the number of file owners inside the CT has already reached the limit, these users will not be able to own files.

Enabling per-user/group quotas for a Container requires restarting the CT. Choose the value carefully: the bigger the value you set, the more kernel memory overhead this Container creates. The value must be greater than or equal to the number of entries in the CT <code>/etc/passwd</code> and <code>/etc/group</code> files. Taking into account that a newly created Red Hat Linux-based CT has about 80 entries in total, the typical value would be 100. For Containers with a large number of users, however, this value may be increased.
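To pick a value, you can count the entries in the passwd and group files and add some headroom. The sketch below counts on the local system (inside a CT you would run the same pipeline via <code>vzctl exec</code>); the +20 margin is an arbitrary illustration:

```shell
# Estimate a QUOTAUGIDLIMIT value from the number of passwd and group entries.
entries=$(cat /etc/passwd /etc/group | wc -l)
suggested=$((entries + 20))   # leave some room for new users and groups
echo "entries=$entries suggested=$suggested"
```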

The session below turns on second-level quotas for Container 101:

<pre>
# vzctl set 101 --quotaugidlimit 100 --save
Unable to apply new quota values: ugid quota not initialized
Saved parameters for CT 101
# vzctl stop 101; vzctl start 101
Stopping CT …
CT was stopped
CT is unmounted
Starting CT …
CT is mounted
Adding IP address(es): 192.168.1.101
Hostname for CT set: vps101.my.org
CT start in progress…
</pre>

=== Setting Up Second-Level Disk Quota Parameters ===
In order to work with disk quotas inside a CT, you should have standard quota tools installed:
<pre>
# vzctl exec 101 rpm -q quota
quota-3.12-5
</pre>

This command shows that the quota package is installed into the Container. Use the utilities from this package (as is prescribed in your Linux manual) to set OpenVZ second-level quotas for the given CT. For example:

<pre>
# ssh ve101
root@ve101's password:
Last login: Sat Jul 5 00:37:07 2003 from 10.100.40.18
[root@ve101 root]# edquota root
Disk quotas for user root (uid 0):
Filesystem blocks soft hard inodes soft hard
/dev/simfs 38216 50000 60000 45454 70000 70000
[root@ve101 root]# repquota -a
*** Report for user quotas on device /dev/simfs
Block grace time: 00:00; Inode grace time: 00:00
Block limits File limits
User used soft hard grace used soft hard grace
----------------------------------------------------------------------
root -- 38218 50000 60000 45453 70000 70000
''[the rest of repquota output is skipped]''

[root@ve101 root]# dd if=/dev/zero of=test
dd: writing to `test': Disk quota exceeded
23473+0 records in
23472+0 records out
[root@ve101 root]# repquota -a
*** Report for user quotas on device /dev/simfs
Block grace time: 00:00; Inode grace time: 00:00
Block limits File limits
User used soft hard grace used soft hard grace
----------------------------------------------------------------------
root +- 50001 50000 60000 none 45454 70000 70000
''[the rest of repquota output is skipped]''
</pre>

The above example shows the session when the root user has the disk space quota set to the hard limit of 60,000 1Kb blocks and to the soft limit of 50,000 1Kb blocks; both hard and soft limits for the number of inodes are set to 70,000.

It is also possible to set the grace period separately for block limits and inodes limits with the help of the /usr/sbin/setquota command. For more information on using the utilities from the quota package, please consult the system administration guide shipped with your Linux distribution or manual pages included in the package.

=== Checking Quota Status ===
As the Hardware Node system administrator, you can check the quota status for any Container with the <code>vzquota stat</code> and <code>vzquota show</code> commands. The first command reports the status from the kernel and should be used for running Containers. The second command reports the status from the quota file (located at <code>/var/vzquota/quota.''ctid''</code>) and should be used for stopped Containers. Both commands have the same output format.

The session below shows a partial output of CT 101 quota statistics:

<pre>
# vzquota stat 101 -t

resource usage softlimit hardlimit grace
1k-blocks 38281 1000000 1100000
inodes 45703 90000 91000
User/group quota: on,active
Ugids: loaded 34, total 34, limit 100
Ugid limit was exceeded: no

User/group grace times and quotafile flags:
type block_exp_time inode_exp_time dqi_flags
user 0h
group 0h

User/group objects:
ID type resource usage softlimit hardlimit grace status
0 user 1k-blocks 38220 50000 60000 loaded
0 user inodes 45453 70000 70000 loaded
''[the rest is skipped]''
</pre>

The first three lines of the output show the status of first-level disk quotas for the Container. The rest of the output displays statistics for user/group quotas, with a separate line for each user and group ID existing in the system.

If you do not need the second-level quota statistics, you can omit the <code>-t</code> switch from the <code>vzquota</code> command line.
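The first-level lines of this output can also be checked mechanically. The sketch below runs <code>awk</code> over a sample that mirrors the session above so it stays self-contained; on a real node you would pipe <code>vzquota stat ''ctid''</code> into the same filter:

```shell
# Flag first-level quota resources whose usage exceeds the soft limit.
cat <<'EOF' | awk '$1 == "1k-blocks" || $1 == "inodes" {
    status = ($2 > $3) ? "OVER soft limit" : "ok"
    printf "%s: %d/%d %s\n", $1, $2, $3, status
}'
resource usage softlimit hardlimit grace
1k-blocks 38281 1000000 1100000
inodes 45703 90000 91000
EOF
# prints:
# 1k-blocks: 38281/1000000 ok
# inodes: 45703/90000 ok
```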

== Managing CPU Share ==
This section explains the CPU resource parameters (CPU share) that you can configure and monitor for each Container.

The table below provides the name and the description of each CPU parameter. The File column indicates whether the parameter is defined in the OpenVZ global configuration file (G) or in the CT configuration files (V).
{| class="wikitable"
! Parameter
! Description
! File
|-
| ve0cpuunits
| This is a positive integer number that determines the minimal guaranteed share of the CPU time Container 0 (the Hardware Node itself) will receive. It is recommended to set the value of this parameter to be 5-10% of the power of the Hardware Node.
| G
|-
| cpuunits
| This is a positive integer number that determines the minimal guaranteed share of the CPU time the corresponding Container will receive.
| V
|-
| cpulimit
| This is a positive number indicating the percentage of CPU time the corresponding CT is not allowed to exceed.
| V
|}

The OpenVZ CPU resource control utilities allow you to guarantee each Container a certain amount of CPU time. A Container can consume more than the guaranteed value if no other Containers are competing for the CPU and the <code>cpulimit</code> parameter is not defined.

To get an idea of the optimal share to assign to a Container, check the current Hardware Node CPU utilization:

<pre>
# vzcpucheck
Current CPU utilization: 5166
Power of the node: 73072.5
</pre>

The output of this command displays the total number of the so-called CPU units consumed by all running Containers and Hardware Node processes. This number is calculated by OpenVZ with the help of a special algorithm. The above example illustrates the situation when the Hardware Node is underused. In other words, the running Containers receive more CPU time than was guaranteed to them.

In the following example, Container 102 is guaranteed to receive about 2% of the CPU time even if the Hardware Node is fully used, or in other words, if the current CPU utilization equals the power of the Node. Besides, CT 102 will not receive more than 4% of the CPU time even if the CPU is not fully loaded:

<pre>
# vzctl set 102 --cpuunits 1500 --cpulimit 4 --save
Saved parameters for CT 102
# vzctl start 102
Starting CT …
CT is mounted
Adding IP address(es): 192.168.1.102
CT start in progress…
# vzcpucheck
Current CPU utilization: 6667
Power of the node: 73072.5
</pre>

Container 102 will receive from 2 to 4% of the Hardware Node CPU time unless the Hardware Node is overcommitted, i.e. the running Containers have been promised more CPU units than the power of the Hardware Node. In this case the CT might get less than 2 per cent.
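The guaranteed share quoted above can be reproduced with simple arithmetic: divide the CT's CPUUNITS by the node power reported by <code>vzcpucheck</code>. The numbers below are taken from the sessions in this section:

```shell
# Estimate the guaranteed CPU share of a CT from its CPUUNITS value and
# the node power reported by vzcpucheck.
cpuunits=1500
power=73072.5
awk -v u="$cpuunits" -v p="$power" 'BEGIN { printf "%.2f%%\n", u / p * 100 }'
# prints: 2.05%
```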

== Managing System Parameters ==
The resources a Container may allocate are defined by the system resource control parameters. These parameters can be subdivided into the following categories: primary, secondary, and auxiliary. The primary parameters are the starting point for creating a Container configuration from scratch. The secondary parameters depend on the primary ones and are calculated from them according to a set of constraints. The auxiliary parameters help improve fault isolation among applications in one and the same Container and the way applications handle errors and consume resources. They also help enforce administrative policies on Containers by limiting the resources required by an application and preventing the application from running in the Container.

Listed below are all the system resource control parameters. The parameters whose names start with “num” are integer counts. The parameters ending in “buf” or “size” are measured in bytes. The parameters containing “pages” in their names are measured in 4096-byte pages (IA32 architecture). The File column indicates that all the system parameters are defined in the corresponding CT configuration files (V).
{| class="wikitable"
|+ Primary parameters
! Parameter
! Description
! File
|-
| avnumproc
| The average number of processes and threads.
| V
|-
| numproc
| The maximal number of processes and threads the CT may create.
| V
|-
| numtcpsock
| The number of TCP sockets (PF_INET family, SOCK_STREAM type). This parameter limits the number of TCP connections and, thus, the number of clients the server application can handle in parallel.
| V
|-
| numothersock
|The number of sockets other than TCP ones. Local (UNIX-domain) sockets are used for communications inside the system. UDP sockets are used, for example, for Domain Name Service (DNS) queries. UDP and other sockets may also be used in some very specialized applications (SNMP agents and others).
| V
|-
| vmguarpages
| The memory allocation guarantee, in pages (one page is 4 Kb). CT applications are guaranteed to be able to allocate additional memory so long as the amount of memory accounted as privvmpages (see the auxiliary parameters) does not exceed the configured barrier of the vmguarpages parameter. Above the barrier, additional memory allocation is not guaranteed and may fail in case of overall memory shortage.
| V
|}

{| class="wikitable"
|+ Secondary parameters
! Parameter
! Description
! File
|-
| kmemsize
| The size of unswappable kernel memory allocated for the internal kernel structures for the processes of a particular CT.
| V
|-
| tcpsndbuf
| The total size of send buffers for TCP sockets, i.e. the amount of kernel memory allocated for the data sent from an application to a TCP socket, but not acknowledged by the remote side yet.
| V
|-
| tcprcvbuf
| The total size of receive buffers for TCP sockets, i.e. the amount of kernel memory allocated for the data received from the remote side, but not read by the local application yet.
| V
|-
| othersockbuf
| The total size of UNIX-domain socket buffers, UDP, and other datagram protocol send buffers.
| V
|-
| dgramrcvbuf
| The total size of receive buffers of UDP and other datagram protocols.
| V
|-
| oomguarpages
| The out-of-memory guarantee, in pages (one page is 4 Kb). Any CT process will not be killed even in case of heavy memory shortage if the current memory consumption (including both physical memory and swap) does not reach the oomguarpages barrier.
| V
|}

{| class="wikitable"
|+ Auxiliary parameters
! Parameter
! Description
! File
|-
| lockedpages
| The memory not allowed to be swapped out (locked with the mlock() system call), in pages.
| V
|-
| shmpages
| The total size of shared memory (including IPC, shared anonymous mappings and tmpfs objects) allocated by the processes of a particular CT, in pages.
| V
|-
| privvmpages
| The size of private (or potentially private) memory allocated by an application. The memory that is always shared among different applications is not included in this resource parameter.
| V
|-
| numfile
| The number of files opened by all CT processes.
| V
|-
| numflock
| The number of file locks created by all CT processes.
| V
|-
| numpty
| The number of pseudo-terminals (used, for example, by ssh sessions and the screen and xterm applications).
|V
|-
| numsiginfo
| The number of siginfo structures (essentially, this parameter limits the size of the signal delivery queue).
| V
|-
| dcachesize
| The total size of dentry and inode structures locked in the memory.
| V
|-
| physpages
| The total size of RAM used by the CT processes. This is an accounting-only parameter currently. It shows the usage of RAM by the CT. For the memory pages used by several different CTs (mappings of shared libraries, for example), only the corresponding fraction of a page is charged to each CT. The sum of the physpages usage for all CTs corresponds to the total number of pages used in the system by all the accounted users.
| V
|-
| numiptent
| The number of IP packet filtering entries.
| V
|}

You can edit any of these parameters in the <code>/etc/vz/conf/''CTID''.conf</code> file of the corresponding CT by means of your favorite text editor (for example, vi or emacs), or by running the vzctl set command. For example:

<pre>
# vzctl set 101 --kmemsize 2211840:2359296 --save
Saved parameters for CT 101
</pre>
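Because the units differ between parameter families (bytes for “size”/“buf” values, 4096-byte pages for “pages” values), a quick conversion helps when choosing values. The kmemsize numbers below come from the example above; the vmguarpages barrier is illustrative:

```shell
# Unit helpers: "size"/"buf" parameters are in bytes, "pages" parameters
# in 4096-byte pages.
kmem_barrier=2211840
kmem_limit=2359296
echo "kmemsize barrier: $((kmem_barrier / 1024)) KB, limit: $((kmem_limit / 1024)) KB"

pages=6144                          # e.g. a vmguarpages barrier
echo "vmguarpages barrier: $((pages * 4096 / 1048576)) MB"
# prints:
# kmemsize barrier: 2160 KB, limit: 2304 KB
# vmguarpages barrier: 24 MB
```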

=== Monitoring System Resources Consumption ===
It is possible to check the system resource control parameter statistics from within a Container. The primary use of these statistics is to understand which particular resource limit is preventing an application from starting. Moreover, these statistics report the current and maximal resource consumption for the running Container. This information can be obtained from the <code>/proc/user_beancounters</code> file.

The output below illustrates a typical session:

<pre>
# vzctl exec 101 cat /proc/user_beancounters
Version: 2.5
uid resource held maxheld barrier limit failcnt
101: kmemsize 803866 1246758 2457600 2621440 0
lockedpages 0 0 32 32 0
privvmpages 5611 7709 22528 24576 0
shmpages 39 695 8192 8192 0
dummy 0 0 0 0 0
numproc 16 27 65 65 0
physpages 1011 3113 0 2147483647 0
vmguarpages 0 0 6144 2147483647 0
oomguarpages 2025 3113 6144 2147483647 0
numtcpsock 3 4 80 80 0
numflock 2 4 100 110 0
numpty 0 1 16 16 0
numsiginfo 0 2 256 256 0
tcpsndbuf 0 6684 319488 524288 0
tcprcvbuf 0 4456 319488 524288 0
othersockbuf 2228 9688 132096 336896 0
dgramrcvbuf 0 4276 132096 132096 0
numothersock 4 17 80 80 0
dcachesize 78952 108488 524288 548864 0
numfile 194 306 1280 1280 0
dummy 0 0 0 0 0
dummy 0 0 0 0 0
dummy 0 0 0 0 0
numiptent 0 0 128 128 0
</pre>

The '''failcnt''' column displays the number of unsuccessful attempts to allocate a particular resource. If this value increases after an application fails to start, then the corresponding resource limit is in effect lower than is needed by the application.

The '''held''' column displays the current resource usage, and the '''maxheld''' column shows the maximal resource consumption over the last accounting period. The meaning of the '''barrier''' and '''limit''' columns depends on the parameter and is explained in the [[UBC]] guide.

Inside a CT, the <code>/proc/user_beancounters</code> file displays the information on the given CT only, whereas from the Hardware Node this file displays the information on all the CTs.
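A common way to use this file is to look for non-zero '''failcnt''' values. The sketch below runs on a shortened sample (with the uid column omitted and an invented numtcpsock failure count) rather than the real <code>/proc/user_beancounters</code>, so it stays self-contained; on a node you would <code>cat</code> the real file into the same filter:

```shell
# Report resources with a non-zero failcnt from a user_beancounters-style
# table (last field is failcnt).
cat <<'EOF' | awk 'NF >= 6 && $1 != "uid" && $NF + 0 > 0 { print $1 ": failcnt=" $NF }'
kmemsize 803866 1246758 2457600 2621440 0
numtcpsock 3 4 80 80 4
numfile 194 306 1280 1280 0
EOF
# prints: numtcpsock: failcnt=4
```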

=== Monitoring Memory Consumption ===
You can monitor a number of memory parameters for the whole Hardware Node and for particular Containers with the help of the <code>vzmemcheck</code> utility. For example:

<pre>
# vzmemcheck -v
Output values in %
veid LowMem LowMem RAM MemSwap MemSwap Alloc Alloc Alloc
util commit util util commit util commit limit
101 0.19 1.93 1.23 0.34 1.38 0.42 1.38 4.94
1 0.27 8.69 1.94 0.49 7.19 1.59 2.05 56.54
----------------------------------------------------------------------
Summary: 0.46 10.62 3.17 0.83 8.57 2.02 3.43 61.48
</pre>

The <code>-v</code> option displays the memory information for each Container rather than for the Hardware Node in general. It is also possible to show absolute values in megabytes by using the <code>-A</code> switch. The monitored parameters are (from left to right in the output above): low memory utilization, low memory commitment, RAM utilization, memory+swap utilization, memory+swap commitment, allocated memory utilization, allocated memory commitment, and allocated memory limit.

To understand these parameters, let us first draw the distinction between utilization and commitment levels.

* ''Utilization level'' is the amount of resources consumed by CTs at the given time. In general, low utilization values mean that the system is under-utilized. Often, it means that the system is capable of supporting more Containers if the existing CTs continue to maintain the same load and resource consumption level. High utilization values (in general, more than 1, or 100%) mean that the system is overloaded and the service level of the Containers is degraded.

* ''Commitment level'' shows how much resources are “promised” to the existing Containers. Low commitment levels mean that the system is capable of supporting more Containers. Commitment levels more than 1 mean that the Containers are promised more resources than the system has, and the system is said to be overcommitted. If the system runs a lot of CTs, it is usually acceptable to have some overcommitment because it is unlikely that all Containers will request resources at one and the same time. However, very high commitment levels will cause CTs to fail to allocate and use the resources promised to them and may hurt system stability.

Below is an overview of the resources checked by the <code>vzmemcheck</code> utility. Their complete description is provided in the [[UBC]] guide.

The ''low memory'' is the most important RAM area representing the part of memory residing at lower addresses and directly accessible by the kernel. In OpenVZ, the size of the “low” memory area is limited to 832 MB in the UP (uniprocessor) and SMP versions of the kernel, and to 3.6 GB in the Enterprise version of the kernel. If the total size of the computer RAM is less than the limit (832 MB or 3.6 GB, respectively), then the actual size of the “low” memory area is equal to the total memory size.

The union of ''RAM and swap'' space is the main computer resource determining the amount of memory available to applications. If the total size of memory used by applications exceeds the RAM size, the Linux kernel moves some data to swap and loads it back when the application needs it. More frequently used data tends to stay in RAM; less frequently used data spends more time in swap. Swap-in and swap-out activity reduces system performance to some extent, but if this activity is not excessive, the performance decrease is not very noticeable. On the other hand, the benefits of using swap space are considerable, allowing you to roughly double the number of Containers in the system. Swap space is essential for handling system load bursts: a system with enough swap space just slows down at high load bursts, whereas a system without swap space reacts to them by refusing memory allocations (causing applications to refuse clients or terminate) and directly killing some applications. Additionally, the presence of swap space helps the system better balance memory and move data between the low memory area and the rest of the RAM.

''Allocated memory'' is a more “virtual” system resource than the RAM or RAM plus swap space. Applications may allocate memory but start to use it only later, and only then will the amount of free physical memory really decrease. The sum of the sizes of memory allocated in all Containers is only the estimation of how much physical memory will be used if all applications claim the allocated memory. The memory available for allocation can be not only used (the '''Alloc util''' column) or promised (the '''Alloc commit''' column), but also limited (applications will not be able to allocate more resources than is indicated in the '''Alloc limit''' column).

== Managing CT Resources Configuration ==
Any CT is configured by means of its own configuration file. You can manage your CT configurations in a number of ways:
<ol>
<li>Using configuration sample files shipped with OpenVZ. These files are used when a new Container is being created (for details, see the [[#Creating and Configuring New Container]] section). They are stored in <code>/etc/vz/</code> and have the <code>ve-''name''.conf-sample</code> mask. Currently, the following configuration sample files are provided:
* light – to be used for creating “light” CTs having restrictions on the upper limit of quality of service parameters;
* vps.basic – to be used for common CTs.

{{Note|Configuration sample files cannot contain spaces in their names.}}

Any sample configuration file may also be applied to a Container after it has been created. You would do this if, for example, you want to upgrade or downgrade the overall resources configuration of a particular CT:

# vzctl set 101 --applyconfig light --save

This command applies all the parameters from the ve-light.conf-sample file to the given CT, except for the OSTEMPLATE, VE_ROOT, and VE_PRIVATE parameters, should they exist in the sample configuration file.</li>

<li>Using OpenVZ specialized utilities for preparing configuration files in their entirety. The tasks these utilities perform are described in the following subsections of this section.</li>

<li>Creating and editing the corresponding configuration file (<code>/etc/vz/conf/''CTID''.conf</code>) directly, with the help of any text editor. The instructions on how to edit CT configuration files directly are provided in the four preceding sections. In this case you have to edit all the configuration parameters separately, one by one.</li>
</ol>
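The third approach, direct editing, can be scripted as well. The sketch below edits a barrier:limit pair non-interactively; the file contents and the new values are illustrative (on a real Node you would operate on <code>/etc/vz/conf/''CTID''.conf</code>, and should validate the result with <code>vzcfgvalidate</code> afterwards):

```shell
# Work on a temporary copy of a CT configuration file (illustrative values).
conf=$(mktemp)
cat > "$conf" <<'EOF'
KMEMSIZE="2211840:2359296"
NUMPROC="65:65"
EOF

# Raise the KMEMSIZE barrier:limit pair in place.
sed -i 's/^KMEMSIZE=.*/KMEMSIZE="2752512:2936012"/' "$conf"

new_kmemsize=$(grep '^KMEMSIZE=' "$conf")
echo "$new_kmemsize"
rm -f "$conf"
```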

=== Splitting Hardware Node Into Equal Pieces ===
It is possible to create a Container configuration representing roughly a given fraction of the Hardware Node. For example, to create a configuration that allows up to 20 fully loaded Containers to run simultaneously on the given Hardware Node, proceed as illustrated below:

# '''cd /etc/vz/'''
# '''vzsplit -n 20 -f vps.mytest'''
Config /etc/vz/ve-vps.mytest.conf-sample was created
# '''vzcfgvalidate ve-vps.mytest.conf-sample'''
Recommendation: kmemsize.lim-kmemsize.bar should be > 253952 \
(currently, 126391)
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 93622)

Note that the configuration produced depends on the resources of the given Hardware Node. It is therefore important to validate the resulting configuration file before using it, which is done with the <code>vzcfgvalidate</code> utility.
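The idea behind the split can be sketched with simple arithmetic: a Node resource is divided evenly among the requested number of Containers. The real <code>vzsplit</code> heuristics are more elaborate (they read the actual Node resources and reserve capacity for the host itself); the numbers below are purely illustrative:

```shell
# Simplified sketch of dividing a Node resource among N containers.
# A 2 GB Node and 20 containers are assumed for illustration only.
total_ram_kb=2097152
n_containers=20
per_ct_kb=$(( total_ram_kb / n_containers ))
echo "Each CT is guaranteed roughly ${per_ct_kb} KB"
```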

The number of Containers you can actually run on the Hardware Node is several times greater than the value specified on the command line, because Containers normally do not consume all the resources guaranteed to them. To illustrate this, let us look at a Container created from the configuration produced above:

# '''vzctl create 101 --ostemplate fedora-core-4 --config vps.mytest'''
Creating CT private area: /vz/private/101
CT private area was created
# '''vzctl set 101 --ipadd 192.168.1.101 --save'''
Saved parameters for CT 101
# '''vzctl start 101'''
Starting CT …
CT is mounted
Adding IP address(es): 192.168.1.101
CT start in progress…
# '''vzcalc 101'''
Resource Current(%) Promised(%) Max(%)
Memory 0.53 1.90 6.44

As you can see, if Containers use all the resources guaranteed to them, around 20 CTs can run simultaneously. However, judging by the Promised column, it is safe to run 40–50 such Containers on this Hardware Node.
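The 40–50 estimate can be checked with back-of-the-envelope arithmetic: with each CT promising about 1.90% of Node memory, roughly 100 / 1.90 ≈ 52 such Containers fit before the promised memory exceeds the Node. The sketch below uses integer math with the percentage scaled by 100:

```shell
# Overcommit estimate: how many CTs fit before promised memory is exhausted.
# 1.90% (from the vzcalc output above) is scaled by 100 for integer math.
promised_percent=190
fit=$(( 100 * 100 / promised_percent ))
echo "About ${fit} containers before promised memory is exhausted"
```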

=== Validating Container Configuration ===
The system resource control parameters have complex interdependencies, and violating them can be catastrophic for the Container. To ensure that a Container does not break them, it is important to validate the CT configuration file before creating CTs on its basis.
The typical validation scenario is shown below:

# '''vzcfgvalidate /etc/vz/conf/101.conf'''
Error: kmemsize.bar should be > 1835008 (currently, 25000)
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 65536)
Recommendation: othersockbuf.bar should be > 132096 (currently, 122880)
# '''vzctl set 101 --kmemsize 2211840:2359296 --save'''
Saved parameters for CT 101
# '''vzcfgvalidate /etc/vz/conf/101.conf'''
Recommendation: kmemsize.lim-kmemsize.bar should be > 163840 (currently, 147456)
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 65536)
Recommendation: othersockbuf.bar should be > 132096 (currently, 122880)
Validation completed: success

The utility checks constraints on the resource management parameters and displays all the constraint violations found. There can be three levels of violation severity:
{| class="wikitable"
! Recommendation
| This is a suggestion, which is not critical for Container or Hardware Node operations. The configuration is valid in general; however, if the system has enough memory, it is better to increase the settings as advised.
|-
! Warning
| A constraint is not satisfied, and the configuration is invalid. The Container applications may not have optimal performance or may fail in an ungraceful way.
|-
! Error
| An important constraint is not satisfied, and the configuration is invalid. The Container applications have increased chances to fail unexpectedly, to be terminated, or to hang.
|}

In the scenario above, the first run of the <code>vzcfgvalidate</code> utility found a critical error for the <code>kmemsize</code> parameter value. After setting reasonable values for <code>kmemsize</code>, the resulting configuration produced only recommendations, and the Container can safely be run with this configuration.
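The kind of constraint <code>vzcfgvalidate</code> checks here can be reproduced by hand: the gap between a parameter's limit and barrier should exceed a minimum headroom. The sketch below uses the <code>kmemsize</code> values and the 163840-byte threshold reported in the session above; it is a simplified illustration, not the utility's actual logic:

```shell
# Check the gap between the kmemsize limit and barrier (values from above).
bar=2211840
lim=2359296
headroom=$(( lim - bar ))
min_headroom=163840
if [ "$headroom" -gt "$min_headroom" ]; then
  echo "ok: gap is ${headroom} bytes"
else
  echo "recommendation: increase limit (gap is only ${headroom} bytes)"
fi
```

With these values the gap is 147456 bytes, below the threshold, which is exactly why the second validation run still printed a recommendation for <code>kmemsize</code>.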
