# UBC secondary parameters

Secondary (dependent) UBC parameters are directly connected to the primary ones and can't be configured arbitrarily.

## kmemsize

The size of unswappable memory, in bytes, allocated by the operating system kernel.

It includes all the kernel internal data structures associated with the container's processes, except the network buffers discussed below. These data structures reside in the first gigabyte of the computer's RAM, the so-called “low memory”.

This parameter is related to the number of processes (numproc). Each process consumes a certain amount of kernel memory: at least 24 KB, and typically 30–60 KB. Very large processes may consume much more than that.

It is important to keep a certain safety gap between the barrier and the limit of the kmemsize parameter (for example, 10%, as in UBC configuration examples). Setting the barrier equal to the limit may lead to situations where the kernel has to kill the container's applications to keep kmemsize usage under the limit.

Kmemsize limits can't be set arbitrarily high. The total amount of kmemsize consumable by all containers in the system plus the socket buffer space (see below) is limited by the hardware resources of the system. This total limit is discussed in “low memory”.
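As a rough illustration of the sizing guidance above, the sketch below derives a kmemsize barrier from an expected process count and adds a 10% safety gap. The 45 KB per-process figure and the helper name are assumptions chosen for illustration, not values from any official configuration.

```python
# Hypothetical kmemsize sizing sketch: assume ~45 KB of kernel memory
# per typical process (within the 30-60 KB range mentioned above) and
# keep a 10% gap between the barrier and the limit.
def kmemsize_estimate(numproc, kb_per_process=45, gap=0.10):
    barrier = numproc * kb_per_process * 1024  # bytes
    limit = int(barrier * (1 + gap))
    return barrier, limit

barrier, limit = kmemsize_estimate(numproc=240)
print(barrier, limit)  # barrier and limit in bytes
```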

## tcpsndbuf

The total size of buffers used to send data over TCP network connections. These socket buffers reside in “low memory”.

The tcpsndbuf parameter depends on the number of TCP sockets (numtcpsock) and should allow some minimal amount of socket buffer memory for each socket, as discussed in UBC consistency check:

$$tcpsndbuf_{lim} - tcpsndbuf_{bar} \geq 2.5\,\mathrm{KB} \cdot numtcpsock.$$

If this restriction is not satisfied, some network connections may silently stall, being unable to transmit data.
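This check can be done mechanically. A minimal Python sketch, using invented barrier, limit, and socket-count values (2.5 KB is taken as 2560 bytes):

```python
# Verify the tcpsndbuf consistency rule: the gap between the limit and
# the barrier must be at least 2.5 KB per TCP socket.
def sndbuf_gap_ok(limit, barrier, numtcpsock, per_socket=2560):
    return limit - barrier >= per_socket * numtcpsock

# Illustrative numbers, not a recommended configuration.
print(sndbuf_gap_ok(limit=2703360, barrier=1720320, numtcpsock=360))  # True
```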

Setting high values for the tcpsndbuf parameter may, but does not necessarily, improve the performance of network communications. Note that, unlike with most other parameters, hitting tcpsndbuf limits and failing socket buffer allocations do not strongly harm applications; they merely reduce the performance of network communications.

If you use rtorrent in a container, a low value for tcpsndbuf may cause rtorrent to consume an unusual amount of CPU. In this case, set a higher value. Also watch the failcnt counter in /proc/user_beancounters.
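A small sketch of watching failcnt values, run here against a hard-coded sample instead of the real /proc/user_beancounters (the sample numbers are invented; the column layout follows the usual held/maxheld/barrier/limit/failcnt order):

```python
# Report resources whose failcnt (last column) is nonzero in
# /proc/user_beancounters-style output.
SAMPLE = """\
101: kmemsize 1834234 1965952 11055923 11377049 0
     tcpsndbuf 0 0 1720320 2703360 13
"""

def failed_resources(text):
    failures = {}
    for line in text.splitlines():
        fields = line.replace(":", " ").split()
        # the resource name sits six fields before the end when the
        # held/maxheld/barrier/limit/failcnt columns are all present
        if len(fields) >= 6 and fields[-1].isdigit():
            name, failcnt = fields[-6], int(fields[-1])
            if failcnt > 0:
                failures[name] = failcnt
    return failures

print(failed_resources(SAMPLE))  # {'tcpsndbuf': 13}
```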

Tcpsndbuf limits can't be set arbitrarily high. The total amount of tcpsndbuf consumable by all containers in the system plus the kmemsize and other socket buffers is limited by the hardware resources of the system. This total limit is discussed in “low memory”.

## tcprcvbuf

The total size of buffers used to temporarily store data coming in from TCP network connections. These socket buffers also reside in “low memory”.

The tcprcvbuf parameter depends on the number of TCP sockets (numtcpsock) and should allow some minimal amount of socket buffer memory for each socket, as discussed in UBC consistency check:

$$tcprcvbuf_{lim} - tcprcvbuf_{bar} \geq 2.5\,\mathrm{KB} \cdot numtcpsock.$$

If this restriction is not satisfied, some network connections may stall, being unable to receive data, and will be terminated after a couple of minutes.

Similarly to tcpsndbuf, setting high values for the tcprcvbuf parameter may, but does not necessarily, improve the performance of network communications. Hitting tcprcvbuf limits and failing socket buffer allocations do not strongly harm applications; they merely reduce the performance of network communications. However, staying above the barrier of the tcprcvbuf parameter for a long time is more harmful than for tcpsndbuf: long periods of exceeding the barrier may cause some connections to be terminated.

Tcprcvbuf limits can't be set arbitrarily high. The total amount of tcprcvbuf consumable by all containers in the system plus the kmemsize and other socket buffers is limited by the hardware resources of the system. This total limit is discussed in “low memory”.

## othersockbuf

The total size of buffers used by local (UNIX-domain) connections between processes inside the system (such as connections to a local database server), plus the send buffers of UDP and other datagram protocols.

The othersockbuf parameter depends on the number of non-TCP sockets (numothersock), and its configuration should satisfy

$$othersockbuf_{lim} - othersockbuf_{bar} \geq 2.5\,\mathrm{KB} \cdot numothersock.$$

An increased othersockbuf limit is necessary for high-performance communication through local (UNIX-domain) sockets. However, similarly to tcpsndbuf, hitting the othersockbuf limit affects communication performance only, not functionality.

Othersockbuf limits can't be set arbitrarily high. The total amount of othersockbuf consumable by all containers in the system plus the kmemsize and other socket buffers is limited by the hardware resources of the system. This total limit is discussed in “low memory”.

## dgramrcvbuf

The total size of buffers used to temporarily store incoming packets of UDP and other datagram protocols.

The dgramrcvbuf parameter depends on the number of non-TCP sockets (numothersock).

Dgramrcvbuf limits usually don't need to be high. The barriers for both the othersockbuf and dgramrcvbuf parameters should be raised only if the container needs to send and receive very large datagrams.

Hitting the dgramrcvbuf limit means that some datagrams are dropped, which may or may not matter for application functionality. UDP does not guarantee delivery, so even if the buffers permit, datagrams may still be dropped at any later stage of processing, and applications should be prepared for that.

Unlike other socket buffer parameters, for dgramrcvbuf the barrier should be set to the limit.
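The socket-buffer rules from this and the preceding sections can be validated together. A hedged sketch with invented configuration values (2.5 KB taken as 2560 bytes; `check_sockbufs` and the config layout are illustrative, not a real tool):

```python
# Validate the socket-buffer consistency rules described above:
#  - tcpsndbuf/tcprcvbuf: limit - barrier >= 2.5 KB * numtcpsock
#  - othersockbuf:        limit - barrier >= 2.5 KB * numothersock
#  - dgramrcvbuf:         barrier == limit
def check_sockbufs(cfg):
    errors = []
    per_sock = 2560  # 2.5 KB
    for name, socks in (("tcpsndbuf", "numtcpsock"),
                        ("tcprcvbuf", "numtcpsock"),
                        ("othersockbuf", "numothersock")):
        bar, lim = cfg[name]
        if lim - bar < per_sock * cfg[socks]:
            errors.append(name)
    bar, lim = cfg["dgramrcvbuf"]
    if bar != lim:
        errors.append("dgramrcvbuf")
    return errors

cfg = {
    "numtcpsock": 360, "numothersock": 360,
    "tcpsndbuf": (1720320, 2703360),
    "tcprcvbuf": (1720320, 2703360),
    "othersockbuf": (1126080, 2097152),
    "dgramrcvbuf": (262144, 262144),
}
print(check_sockbufs(cfg))  # an empty list means all rules hold
```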

Dgramrcvbuf limits can't be set arbitrarily high. The total amount of dgramrcvbuf consumable by all containers in the system plus the kmemsize and other socket buffers is limited by the hardware resources of the system. This total limit is discussed in “low memory”.

## oomguarpages

The guaranteed amount of memory for the case when memory is “over-booked” (out-of-memory kill guarantee).

The oomguarpages parameter is related to vmguarpages. If applications start to consume more memory than the computer has, the system faces an out-of-memory condition. In this case the operating system starts killing the container's processes to free some memory and prevent the complete death of the system. Although it happens rarely under typical loads, killing processes in out-of-memory situations is a normal reaction of the system, built into every Linux kernel[1].

The oomguarpages parameter accounts for the total amount of memory and swap space used by the processes of a particular container. The barrier of the oomguarpages parameter is the out-of-memory guarantee.

If the current usage of memory and swap space (the value of oomguarpages) plus the amount of used kernel memory (kmemsize) and socket buffers is below the barrier, processes in this container are guaranteed not to be killed in out-of-memory situations. If the system is in an out-of-memory situation and several containers exceed their oomguarpages barrier, applications in the container with the biggest excess are killed first. The failcnt counter of the oomguarpages parameter increases when a process in the container is killed because of an out-of-memory situation.
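The kill-priority rule above can be sketched as follows (the accounting values and the simplified usage figure are assumptions for illustration):

```python
# In an out-of-memory situation, among containers exceeding their
# oomguarpages barrier, the one with the biggest excess is hit first.
def first_victim(containers):
    # "usage" here is a simplified stand-in for oomguarpages usage
    # plus kernel memory and socket buffers, as described above
    over = [(c["usage"] - c["barrier"], c["id"])
            for c in containers if c["usage"] > c["barrier"]]
    return max(over)[1] if over else None

containers = [
    {"id": 101, "usage": 120000, "barrier": 100000},  # excess 20000
    {"id": 102, "usage": 90000,  "barrier": 100000},  # within guarantee
    {"id": 103, "usage": 150000, "barrier": 100000},  # excess 50000
]
print(first_victim(containers))  # the container with the biggest excess
```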

If the administrator needs to make sure that some application won't be forcibly killed regardless of the application's behavior, setting the privvmpages limit to a value not greater than the oomguarpages guarantee significantly reduces the likelihood of the application being killed, and setting it to half of the oomguarpages guarantee completely prevents it. Such configurations are not popular because they significantly reduce the utilization of the hardware.
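That rule can be expressed as a tiny decision sketch (the thresholds follow the text above; the function name and return strings are illustrative):

```python
# Classify OOM-kill protection from the privvmpages limit relative to
# the oomguarpages guarantee (barrier); both values are in pages.
def kill_protection(privvmpages_limit, oomguar_barrier):
    if privvmpages_limit <= oomguar_barrier / 2:
        return "never killed"
    if privvmpages_limit <= oomguar_barrier:
        return "kill unlikely"
    return "no protection"

print(kill_protection(40000, 100000))  # "never killed"
```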

The meaning of the limit for the oomguarpages parameter is unspecified in the current version.

The total out-of-memory guarantees given to the containers should not exceed the physical capacity of the computer, as discussed in UBC systemwide configuration#Memory and swap space. If guarantees are given for more memory than the system has, then in out-of-memory situations applications in containers with a guaranteed level of service, and even system daemons, may be killed.

## privvmpages

Memory allocation limit in pages (which are typically 4096 bytes in size).

The privvmpages parameter allows controlling the amount of memory allocated by applications.

The barrier and the limit of privvmpages parameter control the upper boundary of the total size of allocated memory. Note that this upper boundary doesn't guarantee that the container will be able to allocate that much memory, neither does it guarantee that other containers will be able to allocate their fair share of memory. The primary mechanism to control memory allocation is the vmguarpages guarantee.

The privvmpages parameter accounts for allocated (but possibly not yet used) memory. The accounted value is an estimate of how much memory will really be consumed when the container's applications start to use the allocated memory. Consumed memory is accounted into the oomguarpages parameter.

Since the memory accounted into privvmpages may not be actually used, the sum of current privvmpages values for all containers may exceed the RAM and swap size of the computer.

There should be a safety gap between the barrier and the limit of the privvmpages parameter to reduce the number of memory allocation failures that applications are unable to handle. This gap will be used for “high-priority” memory allocations, such as process stack expansion. Normal-priority allocations will fail once the barrier of privvmpages is reached.
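Since privvmpages is measured in pages, a small sketch converting a desired allocation boundary in megabytes into a barrier/limit pair with a safety gap (the 4096-byte page size and the 10% gap fraction are assumptions for illustration):

```python
PAGE_SIZE = 4096  # bytes; the typical page size mentioned above

# Convert a desired allocation boundary in MB into a privvmpages
# barrier/limit pair, leaving a safety gap for "high-priority"
# allocations such as process stack expansion.
def privvmpages_pair(total_mb, gap=0.10):
    barrier = total_mb * 1024 * 1024 // PAGE_SIZE
    limit = int(barrier * (1 + gap))
    return barrier, limit

print(privvmpages_pair(256))  # barrier and limit in pages
```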

Total privvmpages should correlate with the physical resources of the computer. Also, it is important not to allow any container to allocate a significant portion of all system RAM to avoid serious service level degradation for other containers. Both these configuration requirements are discussed in UBC systemwide configuration#Allocated memory.

There's also an article describing how user pages accounting works.

## System-wide limits

All secondary parameters are related to memory. Total limits on memory-related parameters must not exceed the physical resources of the computer. The restrictions on the configuration of memory-related parameters are listed in UBC systemwide configuration. Those restrictions are very important, because violating them may allow any container to cause the whole system to hang.

## Notes

1. Possible reasons for out-of-memory situations include total vmguarpages guarantees exceeding the available physical resources, or high memory consumption by system processes. Also, the kernel might allow some containers to allocate memory above their vmguarpages guarantees while the system has a lot of free memory; later, when other containers claim their guarantees, the system will experience a memory shortage.