UBC secondary parameters

Revision as of 10:41, 11 March 2008


Secondary (dependent) UBC parameters are directly connected to the primary ones and can't be configured arbitrarily.

kmemsize

Size of unswappable memory in bytes, allocated by the operating system kernel.

It includes all the kernel internal data structures associated with the container's processes, except the network buffers discussed below. These data structures reside in the first gigabyte of the computer's RAM, so called “low memory”.

This parameter is related to the number of processes (numproc). Each process consumes a certain amount of kernel memory (24 kilobytes at minimum, 30–60 KB typically). Very large processes may consume much more than that.

It is important to have a certain safety gap between the barrier and the limit of the kmemsize parameter (for example, 10%, as in UBC configuration examples). Equal barrier and limit of the kmemsize parameter may lead to a situation where the kernel will need to kill the container's applications to keep the kmemsize usage under the limit.

Kmemsize limits can't be set arbitrarily high. The total amount of kmemsize consumable by all containers in the system plus the socket buffer space (see below) is limited by the hardware resources of the system. This total limit is discussed in “low memory”.
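The two rules above (a ~10% safety gap between barrier and limit, and keeping the sum of all containers' kmemsize plus socket buffer space within low memory) can be sketched numerically. All values here are invented for illustration, not recommended settings, and the low-memory figure is an assumption for a particular host:

```python
# Sketch: derive a kmemsize limit with a ~10% safety gap above the barrier,
# then sanity-check the sum of all containers' limits against "low memory".
# All numbers are hypothetical.

LOW_MEMORY = 832 * 2**20  # assumed usable low memory on this host, in bytes

def with_safety_gap(barrier, gap=0.10):
    """Return a (barrier, limit) pair with the given safety gap."""
    return barrier, int(barrier * (1 + gap))

# three hypothetical containers, same kmemsize barrier each
kmemsize = {ctid: with_safety_gap(2_752_512) for ctid in (101, 102, 103)}

# rough allowance for the socket buffer space of all containers (assumption)
socket_buffer_space = 3 * 4 * 2**20

total = sum(limit for _, limit in kmemsize.values()) + socket_buffer_space
assert total < LOW_MEMORY, "total kmemsize + socket buffers must fit in low memory"
```

If the assertion fails, either the per-container limits or the number of containers has to be reduced.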

tcpsndbuf

The total size of buffers used to send data over TCP network connections. These socket buffers reside in “low memory”.

The tcpsndbuf parameter depends on the number of TCP sockets (numtcpsock) and should allow some minimal amount of socket buffer memory for each socket, as discussed in UBC consistency check.

If this restriction is not satisfied, some network connections may silently stall, being unable to transmit data.

Setting high values for the tcpsndbuf parameter may, but doesn't necessarily, increase the performance of network communications. Note that, unlike with most other parameters, hitting tcpsndbuf limits and failed socket buffer allocations do not have a strong negative effect on applications; they just reduce the performance of network communications.
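One practical way to notice such buffer allocation failures is the failcnt column of /proc/user_beancounters. The parser below is a sketch against a sample that mimics the usual layout of that file; verify the exact field order on your kernel before relying on it:

```python
# Sketch: read failcnt for one resource from /proc/user_beancounters-style text.
# SAMPLE imitates the common file layout; field order may vary by kernel version.
SAMPLE = """\
Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize        1835008    1884160    2752512    2936012          0
            tcpsndbuf        319488     524288    1572864    2621440         12
"""

def failcnt(text, resource):
    """Return {container id: failcnt} for one resource across all containers."""
    counts, ctid = {}, None
    for line in text.splitlines():
        fields = line.split()
        # a line starting with "NNN:" opens a new container's block
        if fields and fields[0].endswith(":") and fields[0][:-1].isdigit():
            ctid = int(fields[0][:-1])
            fields = fields[1:]
        if ctid is not None and fields and fields[0] == resource:
            counts[ctid] = int(fields[-1])  # failcnt is the last column
    return counts
```

A nonzero failcnt for tcpsndbuf means some send-buffer allocations failed; per the text above, expect degraded network throughput rather than application errors.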

Tcpsndbuf limits can't be set arbitrarily high. The total amount of tcpsndbuf consumable by all containers in the system plus the kmemsize and other socket buffers is limited by the hardware resources of the system. This total limit is discussed in “low memory”.

tcprcvbuf

The total size of buffers used to temporarily store the data coming from TCP network connections. These socket buffers also reside in “low memory”.

The tcprcvbuf parameter depends on the number of TCP sockets (numtcpsock) and should allow some minimal amount of socket buffer memory for each socket, as discussed in UBC consistency check.

If this restriction is not satisfied, some network connections may stall, being unable to receive data, and will be terminated after a couple of minutes.

Similarly to tcpsndbuf, setting high values for the tcprcvbuf parameter may, but doesn't necessarily, increase the performance of network communications. Hitting tcprcvbuf limits and failed socket buffer allocations do not have a strong negative effect on applications, but just reduce the performance of network communications. However, staying above the barrier of the tcprcvbuf parameter for a long time is more harmful than for tcpsndbuf: long periods of exceeding the barrier may cause termination of some connections.

Tcprcvbuf limits can't be set arbitrarily high. The total amount of tcprcvbuf consumable by all containers in the system plus the kmemsize and other socket buffers is limited by the hardware resources of the system. This total limit is discussed in “low memory”.

othersockbuf

The total size of buffers used by local (UNIX-domain) connections between processes inside the system (such as connections to a local database server) and send buffers of UDP and other datagram protocols.

The othersockbuf parameter depends on the number of non-TCP sockets (numothersock).

Othersockbuf configuration should satisfy the consistency restrictions discussed in UBC consistency check.

An increased limit for othersockbuf is necessary for high-performance communication through local (UNIX-domain) sockets. However, similarly to tcpsndbuf, hitting othersockbuf limits affects communication performance only and does not affect functionality.

Othersockbuf limits can't be set arbitrarily high. The total amount of othersockbuf consumable by all containers in the system plus the kmemsize and other socket buffers is limited by the hardware resources of the system. This total limit is discussed in “low memory”.

dgramrcvbuf

The total size of buffers used to temporarily store the incoming packets of UDP and other datagram protocols.

The dgramrcvbuf parameter depends on the number of non-TCP sockets (numothersock).

Dgramrcvbuf limits usually don't need to be high. Only if a container needs to send and receive very large datagrams should the barriers for both the othersockbuf and dgramrcvbuf parameters be raised.

Hitting dgramrcvbuf limits means that some datagrams are dropped, which may or may not be important for application functionality. UDP is a protocol without guaranteed delivery, so even if the buffers permit, datagrams may just as well be dropped later at any stage of processing, and applications should be prepared for that.

Unlike other socket buffer parameters, for dgramrcvbuf the barrier should be set to the limit.
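The socket-buffer rules stated in the last few sections can be summarized in one configuration sketch: send-side and UNIX-domain buffers keep a safety gap between barrier and limit, while dgramrcvbuf has barrier equal to limit. The byte values below are invented for illustration:

```python
# Sketch of the socket-buffer configuration rules above.
# (barrier, limit) pairs in bytes; all numbers are hypothetical.
sockbuf = {
    "tcpsndbuf":    (1_572_864, 2_621_440),
    "tcprcvbuf":    (1_572_864, 2_621_440),
    "othersockbuf": (1_118_208, 2_097_152),
    "dgramrcvbuf":  (262_144,   262_144),   # barrier == limit, per the rule above
}

for name, (barrier, limit) in sockbuf.items():
    if name == "dgramrcvbuf":
        assert barrier == limit, "dgramrcvbuf barrier should equal its limit"
    else:
        assert barrier < limit, f"{name} needs a safety gap between barrier and limit"
```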

Dgramrcvbuf limits can't be set arbitrarily high. The total amount of dgramrcvbuf consumable by all containers in the system plus the kmemsize and other socket buffers is limited by the hardware resources of the system. This total limit is discussed in “low memory”.

oomguarpages

The guaranteed amount of memory for the case when memory is “over-booked” (out-of-memory kill guarantee).

The oomguarpages parameter is related to vmguarpages. If applications start to consume more memory than the computer has, the system faces an out-of-memory condition. In this case the operating system will start to kill containers' processes to free some memory and prevent the total death of the system. Although this happens very rarely under typical system loads, killing processes in out-of-memory situations is a normal reaction of the system, and it is built into every Linux kernel[1].

The oomguarpages parameter accounts for the total amount of memory and swap space used by the processes of a particular container. The barrier of the oomguarpages parameter is the out-of-memory guarantee.

If the current usage of memory and swap space (the value of oomguarpages) plus the amount of used kernel memory (kmemsize) and socket buffers is below the barrier, processes in this container are guaranteed not to be killed in out-of-memory situations. If the system is in an out-of-memory situation and there are several containers exceeding their oomguarpages guarantee, applications in the container with the biggest excess will be killed first. The failcnt counter of the oomguarpages parameter increases when a process in this container is killed because of an out-of-memory situation.
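The guarantee and kill-selection rules can be sketched as a small calculation. For simplicity this treats every quantity as bytes and ignores unit differences between the real counters; all numbers are invented:

```python
# Sketch of the out-of-memory rules above: a container is safe while
# oomguarpages usage + kmemsize + socket buffers stays below its barrier;
# among offenders, the one with the biggest excess is killed first.
# Hypothetical numbers, all treated as bytes for simplicity.

def excess(usage, kmemsize, sockbuf, barrier):
    """Positive value: the container has exceeded its out-of-memory guarantee."""
    return usage + kmemsize + sockbuf - barrier

containers = {
    101: excess(usage=9_000_000,  kmemsize=1_500_000, sockbuf=500_000, barrier=12_000_000),
    102: excess(usage=15_000_000, kmemsize=2_000_000, sockbuf=800_000, barrier=12_000_000),
}

offenders = {ct: e for ct, e in containers.items() if e > 0}
# the container with the biggest excess loses a process first
victim = max(offenders, key=offenders.get) if offenders else None
```

Here container 101 is within its guarantee (negative excess) and would not be touched, while container 102 would be chosen first.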

If the administrator needs to make sure that some application won't be forcibly killed regardless of the application's behavior, setting the privvmpages limit to a value not greater than the oomguarpages guarantee significantly reduces the likelihood of the application being killed, and setting it to half of the oomguarpages guarantee completely prevents it. Such configurations are not popular because they significantly reduce the utilization of the hardware.

The meaning of the limit for the oomguarpages parameter is unspecified in the current version.

The total out-of-memory guarantees given to the containers should not exceed the physical capacity of the computer, as discussed in UBC systemwide configuration#Memory and swap space. If guarantees are given for more than the system has, then in out-of-memory situations applications in containers with a guaranteed level of service, as well as system daemons, may be killed.

privvmpages

Memory allocation limit.

The privvmpages parameter allows controlling the amount of memory allocated by applications.

The barrier and the limit of the privvmpages parameter control the upper boundary of the total size of allocated memory. Note that this upper boundary doesn't guarantee that the container will be able to allocate that much memory, nor does it guarantee that other containers will be able to allocate their fair share of memory. The primary mechanism to control memory allocation is the vmguarpages guarantee.

The privvmpages parameter accounts for allocated (but possibly not yet used) memory. The accounted value is an estimate of how much memory will really be consumed when the container's applications start to use the allocated memory. Consumed memory is accounted into the oomguarpages parameter.

Since the memory accounted into privvmpages may not be actually used, the sum of current privvmpages values for all containers may exceed the RAM and swap size of the computer.

There should be a safety gap between the barrier and the limit of the privvmpages parameter to reduce the number of memory allocation failures that the application is unable to handle. This gap will be used for “high-priority” memory allocations, such as process stack expansion. Normal-priority allocations will fail when the barrier of privvmpages is reached.

Total privvmpages should correlate with the physical resources of the computer. Also, it is important not to allow any container to allocate a significant portion of all system RAM to avoid serious service level degradation for other containers. Both these configuration requirements are discussed in UBC systemwide configuration#Allocated memory.

There's also an article describing how user pages accounting works.

Units

Oomguarpages and privvmpages values are measured in memory pages. For other secondary parameters, the values are in bytes.
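When comparing page-based parameters (oomguarpages, privvmpages) with byte-based ones, a conversion is needed. The sketch below assumes the common 4 KB page size; check the actual page size on the target host (e.g. with `getconf PAGESIZE`):

```python
# Sketch: converting page-based UBC values to bytes.
# Assumes a 4 KB page size, which is typical but not universal.
PAGE_SIZE = 4096

def pages_to_bytes(pages):
    return pages * PAGE_SIZE

# e.g. an oomguarpages barrier of 6144 pages corresponds to 24 MiB
assert pages_to_bytes(6144) == 24 * 2**20
```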

System-wide limits

All secondary parameters are related to memory. Total limits on memory-related parameters must not exceed the physical resources of the computer. The restrictions on the configuration of memory-related parameters are listed in UBC systemwide configuration. Those restrictions are very important, because their violation may allow any container to cause the whole system to hang.

Notes

  1. Possible reasons for out-of-memory situations are the excess of total vmguarpages guarantees over the available physical resources, or high memory consumption by system processes. Also, the kernel might allow some containers to allocate memory above their vmguarpages guarantees when the system has a lot of free memory; later, when other containers claim their guarantees, the system will experience a memory shortage.