UBC secondary parameters

From OpenVZ Virtuozzo Containers Wiki
Latest revision as of 12:20, 21 October 2011

Secondary (dependant) UBC parameters are directly connected to the primary ones and can't be configured arbitrarily.

kmemsize

Size of unswappable memory in bytes, allocated by the operating system kernel.

It includes all the kernel internal data structures associated with the container's processes, except the network buffers discussed below. These data structures reside in the first gigabyte of the computer's RAM, so called “low memory”.

This parameter is related to the number of processes (numproc). Each process consumes a certain amount of kernel memory — at least 24 KB, typically 30–60 KB. Very large processes may consume much more than that.

It is important to have a certain safety gap between the barrier and the limit of the kmemsize parameter (for example, 10%, as in UBC configuration examples). Equal barrier and limit of the kmemsize parameter may lead to a situation where the kernel needs to kill the container's applications to keep kmemsize usage under the limit.
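
Given the per-process figures above, a barrier/limit pair can be sketched as follows. This is only an illustration: the 45 KB per-process average and the 10% gap are assumptions drawn from the ranges discussed here, and kmemsize_config is a hypothetical helper, not part of any OpenVZ tool.

```python
# Sketch: derive a kmemsize barrier/limit pair from an expected process count.
# The 45 KB per-process average and the 10% safety gap are illustrative
# assumptions, not values mandated by OpenVZ.
KB = 1024

def kmemsize_config(numproc, per_process_kb=45):
    """Return (barrier, limit) in bytes, with a 10% gap above the barrier."""
    barrier = numproc * per_process_kb * KB
    limit = barrier + barrier // 10          # 10% safety gap, integer math
    return barrier, limit

barrier, limit = kmemsize_config(numproc=200)
print(barrier, limit)  # 9216000 10137600
```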

Kmemsize limits can't be set arbitrarily high. The total amount of kmemsize consumable by all containers in the system plus the socket buffer space (see below) is limited by the hardware resources of the system. This total limit is discussed in “low memory”.

tcpsndbuf

The total size of buffers used to send data over TCP network connections. These socket buffers reside in “low memory”.

Tcpsndbuf parameter depends on the number of TCP sockets (numtcpsock) and should allow for some minimal amount of socket buffer memory for each socket, as discussed in UBC consistency check:

    tcpsndbuf_lim − tcpsndbuf_bar ≥ 2.5 KB · numtcpsock

If this restriction is not satisfied, some network connections may silently stall, being unable to transmit data.
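
This consistency rule is easy to check mechanically. The sketch below uses hypothetical values; the same inequality also applies to tcprcvbuf and othersockbuf with their respective socket counters.

```python
# Sketch: check the socket-buffer consistency rule
#     limit - barrier >= 2.5 KB * number_of_sockets
# for tcpsndbuf; the analogous check applies to tcprcvbuf and othersockbuf.
KB = 1024

def sockbuf_gap_ok(barrier, limit, num_sockets):
    """True if the barrier-to-limit gap leaves 2.5 KB per socket."""
    return limit - barrier >= 2.5 * KB * num_sockets

# Hypothetical settings: 80 TCP sockets need at least 200 KB of gap.
print(sockbuf_gap_ok(barrier=1720320, limit=2703360, num_sockets=80))  # True
print(sockbuf_gap_ok(barrier=1720320, limit=1720320, num_sockets=80))  # False
```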

Setting high values for tcpsndbuf parameter may, but doesn't necessarily, increase performance of network communications. Note that, unlike most other parameters, hitting tcpsndbuf limits and failed socket buffer allocations do not have strong negative effect on the applications, but just reduce performance of network communications.

If you use rtorrent in a container, a low value for tcpsndbuf may cause rtorrent to consume an unusually high amount of CPU. In that case, set a higher value. Also watch the failcnt counter in /proc/user_beancounters.
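
Watching failcnt can be automated. The sketch below parses a shortened, made-up excerpt in the usual /proc/user_beancounters format; on a real host you would read the file itself instead of the SAMPLE string.

```python
# Sketch: scan /proc/user_beancounters output for nonzero failcnt values.
# SAMPLE is an illustrative excerpt; the numbers are made up.
SAMPLE = """\
Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
       101: kmemsize        2120807    2779096   11055923   11377049          0
            tcpsndbuf        598916     723416    1720320    2703360         13
            tcprcvbuf             0     131072    1720320    2703360          0
"""

def failed_resources(text):
    """Return {resource: failcnt} for every counter with failcnt > 0."""
    failures = {}
    for line in text.splitlines():
        if line.lstrip().startswith(("Version", "uid")):
            continue                          # skip version and header lines
        fields = line.split()
        if fields and fields[0].endswith(":"):
            fields = fields[1:]               # drop the "uid:" column
        if len(fields) == 6 and fields[-1].isdigit() and int(fields[-1]) > 0:
            failures[fields[0]] = int(fields[-1])
    return failures

print(failed_resources(SAMPLE))  # {'tcpsndbuf': 13}
```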

Tcpsndbuf limits can't be set arbitrarily high. The total amount of tcpsndbuf consumable by all containers in the system plus the kmemsize and other socket buffers is limited by the hardware resources of the system. This total limit is discussed in “low memory”.

tcprcvbuf

The total size of buffers used to temporarily store the data coming from TCP network connections. These socket buffers also reside in “low memory”.

Tcprcvbuf parameter depends on the number of TCP sockets (numtcpsock) and should allow for some minimal amount of socket buffer memory for each socket, as discussed in UBC consistency check:

    tcprcvbuf_lim − tcprcvbuf_bar ≥ 2.5 KB · numtcpsock

If this restriction is not satisfied, some network connections may stall, being unable to receive data, and will be terminated after a couple of minutes.

Similarly to tcpsndbuf, setting high values for the tcprcvbuf parameter may, but doesn't necessarily, increase performance of network communications. Hitting tcprcvbuf limits and failed socket buffer allocations do not have a strong negative effect on the applications, but just reduce the performance of network communications. However, staying above the barrier of the tcprcvbuf parameter for a long time is more harmful than for tcpsndbuf: long periods of exceeding the barrier may cause termination of some connections.

Tcprcvbuf limits can't be set arbitrarily high. The total amount of tcprcvbuf consumable by all containers in the system plus the kmemsize and other socket buffers is limited by the hardware resources of the system. This total limit is discussed in “low memory”.

othersockbuf

The total size of buffers used by local (UNIX-domain) connections between processes inside the system (such as connections to a local database server) and send buffers of UDP and other datagram protocols.

Othersockbuf parameter depends on the number of non-TCP sockets (numothersock).

Othersockbuf configuration should satisfy

    othersockbuf_lim − othersockbuf_bar ≥ 2.5 KB · numothersock

Increased limit for othersockbuf is necessary for high performance of communications through local (UNIX-domain) sockets. However, similarly to tcpsndbuf, hitting othersockbuf affects the communication performance only and does not affect the functionality.

Othersockbuf limits can't be set arbitrarily high. The total amount of othersockbuf consumable by all containers in the system plus the kmemsize and other socket buffers is limited by the hardware resources of the system. This total limit is discussed in “low memory”.

dgramrcvbuf

The total size of buffers used to temporarily store the incoming packets of UDP and other datagram protocols.

Dgramrcvbuf parameter depends on the number of non-TCP sockets (numothersock).

Dgramrcvbuf limits usually don't need to be high. Only if a container needs to send and receive very large datagrams should the barriers for both the othersockbuf and dgramrcvbuf parameters be raised.

Hitting dgramrcvbuf means that some datagrams are dropped, which may or may not be important for application functionality. UDP is a protocol without guaranteed delivery, so even if the buffers permit, datagrams may still be dropped at any later stage of processing, and applications should be prepared for that.

Unlike other socket buffer parameters, for dgramrcvbuf the barrier should be set to the limit.

Dgramrcvbuf limits can't be set arbitrarily high. The total amount of dgramrcvbuf consumable by all containers in the system plus the kmemsize and other socket buffers is limited by the hardware resources of the system. This total limit is discussed in “low memory”.

oomguarpages

The guaranteed amount of memory for the case when memory is “over-booked” (out-of-memory kill guarantee).

Oomguarpages parameter is related to vmguarpages. If applications start to consume more memory than the computer has, the system faces an out-of-memory condition. In this case the operating system starts killing containers' processes to free some memory and prevent the total death of the system. Although it happens very rarely under typical system loads, killing processes in out-of-memory situations is a normal reaction of the system, and it is built into every Linux kernel[1].

Oomguarpages parameter accounts for the total amount of memory and swap space used by the processes of a particular container. The barrier of the oomguarpages parameter is the out-of-memory guarantee.

If the current usage of memory and swap space (the value of oomguarpages) plus the amount of used kernel memory (kmemsize) and socket buffers is below the barrier, processes in this container are guaranteed not to be killed in out-of-memory situations. If the system is in an out-of-memory situation and there are several containers exceeding their oomguarpages barrier, applications in the container with the biggest excess will be killed first. The failcnt counter of the oomguarpages parameter increases when a process in this container is killed because of an out-of-memory situation.

If the administrator needs to make sure that some application won't be forcibly killed regardless of the application's behavior, setting the privvmpages limit to a value not greater than the oomguarpages guarantee significantly reduces the likelihood of the application being killed, and setting it to half of the oomguarpages guarantee prevents it completely. Such configurations are not popular because they significantly reduce the utilization of the hardware.

The meaning of the limit for the oomguarpages parameter is unspecified in the current version.

The total out-of-memory guarantees given to the containers should not exceed the physical capacity of the computer, as discussed in UBC systemwide configuration#Memory and swap space. If more is guaranteed than the system has, then in out-of-memory situations applications in containers with a guaranteed level of service, as well as system daemons, may be killed.
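
The "guarantees must fit in RAM plus swap" rule can be sketched as a simple sum. The 4096-byte page size and all values here are illustrative assumptions.

```python
# Sketch: check that total out-of-memory guarantees fit in RAM plus swap.
# Page size and all numbers are illustrative assumptions.
PAGE = 4096
GB = 1024 ** 3

def oom_guarantees_fit(guarantee_pages, ram_bytes, swap_bytes):
    """guarantee_pages: oomguarpages barriers, in pages, one per container."""
    return sum(guarantee_pages) * PAGE <= ram_bytes + swap_bytes

# Three containers guaranteed 65536 pages (256 MB) each,
# on a host with 1 GB RAM and 1 GB swap:
print(oom_guarantees_fit([65536, 65536, 65536], ram_bytes=GB, swap_bytes=GB))  # True
```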

privvmpages

Memory allocation limit in pages (which are typically 4096 bytes in size).
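
Since the parameter is measured in pages, a desired byte ceiling has to be converted. A minimal sketch, assuming the common 4096-byte page size mentioned above:

```python
# Sketch: convert a desired allocation ceiling in megabytes into
# privvmpages units, assuming a 4096-byte page size.
PAGE = 4096
MB = 1024 ** 2

def mb_to_pages(megabytes):
    """Whole pages covering the given number of megabytes."""
    return megabytes * MB // PAGE

print(mb_to_pages(256))  # 65536
```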

Privvmpages parameter allows controlling the amount of memory allocated by applications.

The barrier and the limit of the privvmpages parameter control the upper boundary of the total size of allocated memory. Note that this upper boundary doesn't guarantee that the container will be able to allocate that much memory, nor does it guarantee that other containers will be able to allocate their fair share of memory. The primary mechanism to control memory allocation is the vmguarpages guarantee.

Privvmpages parameter accounts for allocated (but possibly not yet used) memory. The accounted value is an estimate of how much memory will really be consumed when the container's applications start to use the allocated memory. Consumed memory is accounted into the oomguarpages parameter.

Since the memory accounted into privvmpages may not be actually used, the sum of current privvmpages values for all containers may exceed the RAM and swap size of the computer.

There should be a safety gap between the barrier and the limit of the privvmpages parameter to reduce the number of memory allocation failures that applications are unable to handle. This gap will be used for “high-priority” memory allocations, such as process stack expansion. Normal-priority allocations will fail when the barrier of privvmpages is reached.

Total privvmpages should correlate with the physical resources of the computer. Also, it is important not to allow any container to allocate a significant portion of all system RAM to avoid serious service level degradation for other containers. Both these configuration requirements are discussed in UBC systemwide configuration#Allocated memory.

There's also an article describing how user pages accounting works.

System-wide limits

All secondary parameters are related to memory. Total limits on memory-related parameters must not exceed the physical resources of the computer. The restrictions on the configuration of memory-related parameters are listed in UBC systemwide configuration. Those restrictions are very important, because their violation may allow any container to cause the whole system to hang.
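
The system-wide check can be sketched as summing the memory-related limits across containers against the low-memory budget. The 0.4 fraction and every value below are assumptions for illustration, not figures prescribed by this article; see UBC systemwide configuration for the actual formulas.

```python
# Sketch: a rough low-memory budget check -- total kmemsize plus socket
# buffer limits across all containers, compared against a fraction of
# the low-memory region. The 0.4 fraction and all sizes are assumptions.
def low_memory_ok(per_ct_limits, lowmem_bytes, fraction=0.4):
    """per_ct_limits: one dict of byte limits per container."""
    total = sum(sum(ct.values()) for ct in per_ct_limits)
    return total <= lowmem_bytes * fraction

ct = {"kmemsize": 11377049, "tcpsndbuf": 2703360, "tcprcvbuf": 2703360,
      "othersockbuf": 2703360, "dgramrcvbuf": 262144}
print(low_memory_ok([ct] * 10, lowmem_bytes=1024 ** 3))  # True
```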

Notes

  1. Possible reasons for out-of-memory situations are an excess of the total vmguarpages guarantees over the available physical resources, or high memory consumption by system processes. Also, the kernel might allow some containers to allocate memory above their vmguarpages guarantees when the system had a lot of free memory; later, when other containers claim their guarantees, the system will experience a memory shortage.