== kmemsize ==
Size of unswappable memory in bytes, allocated by the operating system kernel.

It includes all the kernel internal data structures associated with the
container's processes, except the network buffers discussed below.
These data structures reside in the first gigabyte of the computer's RAM,
the so-called [[UBC systemwide configuration#“Low memory”|“low memory”]].

It is important to have a certain safety gap between the <code>barrier</code> and
the <code>limit</code> of the <code>kmemsize</code> parameter
(for example, 10%, as in [[UBC configuration examples]]). Equal <code>barrier</code> and <code>limit</code> of
the <code>kmemsize</code> parameter may lead to the situation where the kernel will
need to kill the container's applications to keep the <code>kmemsize</code>
usage under the limit.

<code>Kmemsize</code> limits can't be set arbitrarily high.
The total amount of <code>kmemsize</code> consumable by all containers
in the system plus the socket buffer space (see below) is limited by the
hardware resources of the system.
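To illustrate the safety-gap rule, here is a minimal Python sketch (not part of any OpenVZ tool; the 10% gap and the example limit are assumptions based on the recommendation above) that derives a <code>barrier</code> from a chosen <code>limit</code> and prints it in the <code>barrier:limit</code> form used in container configuration files:

<pre>
# Minimal sketch (not part of any OpenVZ tool): derive a kmemsize barrier
# that leaves a ~10% safety gap below a chosen limit, as recommended above.

def kmemsize_barrier(limit_bytes: int, gap: float = 0.10) -> int:
    """Return a barrier leaving `gap` (default 10%) of headroom below the limit."""
    return int(limit_bytes * (1.0 - gap))

limit = 18_022_400                 # example limit in bytes (assumed value)
barrier = kmemsize_barrier(limit)  # 16_220_160 bytes, i.e. 10% below the limit
print(f'KMEMSIZE="{barrier}:{limit}"')
</pre>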
== tcpsndbuf ==
Hitting <code>tcpsndbuf</code> does not have a strong negative effect on the
applications, but just reduces the performance of network communications.

If you use rtorrent in a container, a low value for <code>tcpsndbuf</code> may cause rtorrent to consume an unusual amount of CPU. In this case, set a higher value. Also watch the <code>failcnt</code> counters in <code>/proc/user_beancounters</code>.

<code>Tcpsndbuf</code> limits can't be set arbitrarily high.
The total amount of <code>tcpsndbuf</code> consumable by all containers
in the system plus the <code>kmemsize</code> and other socket buffers is limited
by the hardware resources of the system.
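To watch the <code>failcnt</code> counters mentioned above, a small script is enough. The following Python sketch is illustrative only (not an official OpenVZ utility) and assumes the usual <code>/proc/user_beancounters</code> layout, with <code>failcnt</code> as the last column of each data row:

<pre>
# Minimal sketch: report beancounters with non-zero failcnt.
# Run on the hardware node (or inside a container, where only its own
# beancounters are visible). Assumes the usual layout:
#   "uid: resource held maxheld barrier limit failcnt".

def read_failures(path="/proc/user_beancounters"):
    failures = []          # list of (uid, resource, failcnt)
    uid = None
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields or fields[0] in ("Version:", "uid"):
                continue                    # skip header lines
            if fields[0].endswith(":"):     # a new container section starts
                uid = fields[0].rstrip(":")
                fields = fields[1:]
            resource, failcnt = fields[0], int(fields[-1])
            if failcnt > 0:
                failures.append((uid, resource, failcnt))
    return failures

for uid, resource, failcnt in read_failures():
    print(f"CT {uid}: {resource} failcnt={failcnt}")
</pre>
A non-zero <code>failcnt</code> for <code>tcpsndbuf</code> means that send buffer allocations have been refused for that container.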
== tcprcvbuf ==
<code>Tcprcvbuf</code> limits can't be set arbitrarily high.
The total amount of <code>tcprcvbuf</code> consumable by all containers
in the system plus the <code>kmemsize</code> and other socket buffers is limited
by the hardware resources of the system.
This total limit is discussed in [[UBC systemwide configuration#“Low memory”|“low memory”]].
== othersockbuf ==
The total size of buffers used by local (UNIX-domain) connections between
processes inside the system (such as connections to a local database server)
and send buffers of UDP and other datagram protocols.

The <code>othersockbuf</code> parameter depends on the number of non-TCP sockets (<code>[[numothersock]]</code>).

The <code>othersockbuf</code> configuration should satisfy

<math>othersockbuf_{lim} - othersockbuf_{bar} \ge 2.5KB \cdot numothersock.</math>

An increased limit for <code>othersockbuf</code> is necessary for high performance of
communications through local (UNIX-domain) sockets.
However, similarly to <code>tcpsndbuf</code>, hitting <code>othersockbuf</code> affects
the communication performance only and does not affect the functionality.

<code>Othersockbuf</code> limits can't be set arbitrarily high.
The total amount of <code>othersockbuf</code> consumable by all containers
in the system plus the <code>kmemsize</code> and other socket buffers
is limited by the hardware resources of the system.
This total limit is discussed in [[UBC systemwide configuration#“Low memory”|“low memory”]].
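The inequality above can be turned into a quick numeric check. The Python sketch below is purely illustrative; the <code>barrier</code> and <code>numothersock</code> values are made-up examples:

<pre>
# Sketch: check the othersockbuf gap rule
#   othersockbuf_lim - othersockbuf_bar >= 2.5 KB * numothersock

KB = 1024

def min_othersockbuf_limit(barrier: int, numothersock: int) -> int:
    """Smallest limit (in bytes) satisfying the recommended gap."""
    return barrier + int(2.5 * KB * numothersock)

barrier = 1_000_000        # example barrier, bytes (assumed)
numothersock = 80          # example numothersock barrier (assumed)
print(min_othersockbuf_limit(barrier, numothersock))   # prints 1204800
</pre>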
== dgramrcvbuf ==
The total size of buffers used to temporarily store the incoming packets of UDP and
other datagram protocols.

The <code>dgramrcvbuf</code> parameter depends on the number of
non-TCP sockets (<code>[[numothersock]]</code>).

<code>Dgramrcvbuf</code> limits usually don't need to be high.
Only if a container needs to send and receive very large
datagrams should the <code>barrier</code>s of both the <code>othersockbuf</code> and
<code>dgramrcvbuf</code> parameters be raised.

Hitting <code>dgramrcvbuf</code> means that some datagrams are dropped, which may
or may not be important for application functionality.
UDP is a protocol without guaranteed delivery, so even if the buffers
permit, the datagrams may still be dropped later at any stage of the
processing, and applications should be prepared for it.

Unlike other socket buffer parameters, for <code>dgramrcvbuf</code>
the <code>barrier</code> should be set equal to the <code>limit</code>.

<code>Dgramrcvbuf</code> limits can't be set arbitrarily high.
The total amount of <code>dgramrcvbuf</code> consumable by all containers
in the system plus the <code>kmemsize</code> and other socket buffers
is limited by the hardware resources of the system.
This total limit is discussed in [[UBC systemwide configuration#“Low memory”|“low memory”]].
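As a sanity check of the barrier-equals-limit rule above, the Python sketch below scans container configuration files and flags any <code>DGRAMRCVBUF</code> setting whose two values differ. The <code>/etc/vz/conf</code> location and the <code>barrier:limit</code> value format are assumptions about a typical OpenVZ installation:

<pre>
# Sketch: warn when DGRAMRCVBUF barrier != limit in container configs.
# Assumes configs live in /etc/vz/conf/*.conf and values use the
# "barrier:limit" form, e.g. DGRAMRCVBUF="132096:132096".
import glob
import re

pattern = re.compile(r'^DGRAMRCVBUF\s*=\s*"?(\d+):(\d+)"?', re.MULTILINE)

for conf in glob.glob("/etc/vz/conf/*.conf"):
    with open(conf) as f:
        match = pattern.search(f.read())
    if match:
        barrier, limit = map(int, match.groups())
        if barrier != limit:
            print(f"{conf}: barrier {barrier} != limit {limit}")
</pre>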
== oomguarpages ==
The guaranteed amount of memory for the case when the memory is “over-booked”
(out-of-memory kill guarantee).

The <code>oomguarpages</code> parameter is related to <code>[[vmguarpages]]</code>.
If applications start to consume more memory than the computer has,
the system faces an out-of-memory condition.
In this case the operating system will start to kill the container's
processes to free some memory and prevent the total death
of the system. Although it happens very rarely in typical system loads,
killing processes in out-of-memory situations is a normal reaction of the
system, and it is built into every Linux kernel<ref>The possible reasons for out-of-memory situations are an excess of the total <code>[[vmguarpages]]</code> guarantees over the available physical resources, or high memory consumption by system processes. Also, the kernel might allow some containers to allocate memory above their <code>[[vmguarpages]]</code> guarantees while the system has a lot of free memory; later, when other containers claim their guarantees, the system will experience a memory shortage.</ref>.

The <code>[[oomguarpages]]</code> parameter accounts the total amount of
memory and swap space used by the processes of a particular
container.
The <code>barrier</code> of the <code>oomguarpages</code> parameter is the out-of-memory
guarantee.

If the current usage of memory and swap space
(the value of <code>oomguarpages</code>) plus the amount of used kernel memory
(<code>[[kmemsize]]</code>) and socket buffers is below the <code>barrier</code>,
processes in this container are guaranteed not to be killed in
out-of-memory situations.
If the system is in an out-of-memory situation and there are several
containers with an <code>oomguarpages</code> excess, applications in the
container with the biggest excess will be killed first.
The <code>failcnt</code> counter of the <code>oomguarpages</code> parameter
increases when a process in this container is killed because
of an out-of-memory situation.
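The selection rule can be stated compactly in code. The Python sketch below is only an illustration of the rule described above; all numbers are invented, and the real decision is made inside the kernel:

<pre>
# Sketch of the victim-selection rule: the container whose combined usage
# (oomguarpages + kmemsize + socket buffers) exceeds its oomguarpages
# barrier by the largest amount is the first candidate in an OOM situation.
# All values here are expressed in pages for simplicity; in reality
# kmemsize and the socket buffers are accounted in bytes and need conversion.

containers = {
    # ctid: (oomguarpages_held, kmemsize_held, sockbuf_held, oomguarpages_barrier)
    "101": (60_000, 3_000, 1_000, 65_536),
    "102": (90_000, 4_000, 2_000, 65_536),
    "103": (20_000, 1_000,   500, 65_536),
}

def excess(usage):
    oom, kmem, sock, barrier = usage
    return (oom + kmem + sock) - barrier

# Containers with negative excess stay below their guarantee and are safe;
# among the others, the biggest excess goes first.
for ctid, usage in sorted(containers.items(), key=lambda kv: excess(kv[1]), reverse=True):
    print(ctid, excess(usage))
</pre>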
If the administrator needs to make sure that some application won't be
forcibly killed regardless of the application's behavior,
setting the <code>[[privvmpages]]</code> limit to a value not greater than the
<code>oomguarpages</code> guarantee significantly reduces the likelihood of
the application being killed,
and setting it to a half of the <code>oomguarpages</code> guarantee completely
prevents it.
Such configurations are not popular because they significantly reduce
the utilization of the hardware.

The meaning of the <code>limit</code> for the <code>oomguarpages</code> parameter is
unspecified in the current version.

The total out-of-memory guarantees given to the containers should
not exceed the physical capacity of the computer, as discussed in [[UBC systemwide configuration#Memory and swap space]].
If guarantees are given for more than the system has, applications
in containers with a guaranteed level of service, as well as system
daemons, may be killed in out-of-memory situations.
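A rough version of this system-wide check can be scripted. In the Python sketch below the per-container guarantees are listed by hand (invented values), the page size is assumed to be 4096 bytes, and the node capacity is read from <code>/proc/meminfo</code>:

<pre>
# Sketch: compare the sum of oomguarpages guarantees (barriers, in pages)
# against the node's RAM + swap. Guarantee values below are made up.
PAGE_SIZE = 4096

def ram_plus_swap_bytes(path="/proc/meminfo"):
    totals = {}
    with open(path) as f:
        for line in f:
            key, value = line.split(":", 1)
            if key in ("MemTotal", "SwapTotal"):
                totals[key] = int(value.split()[0]) * 1024   # values are in kB
    return sum(totals.values())

oom_guarantees_pages = {"101": 65_536, "102": 131_072, "103": 65_536}

guaranteed = sum(oom_guarantees_pages.values()) * PAGE_SIZE
capacity = ram_plus_swap_bytes()
print("over-committed" if guaranteed > capacity else "ok",
      guaranteed, "of", capacity, "bytes guaranteed")
</pre>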
== privvmpages ==
Memory allocation limit in [[Memory_page|pages]] (which are typically 4096 bytes in size).

The <code>privvmpages</code> parameter
allows controlling the amount of memory allocated by applications.

The <code>barrier</code> and the <code>limit</code> of the <code>privvmpages</code> parameter
control the upper boundary of the total size of allocated memory.
Note that this upper boundary doesn't guarantee that the container
will be able to allocate that much memory, neither does it guarantee that
other containers will be able to allocate their fair share of
memory.
The primary mechanism to control memory allocation is the <code>[[vmguarpages]]</code>
guarantee.

The <code>privvmpages</code> parameter accounts allocated (but, possibly,
not yet used) memory.
The accounted value is an estimation of how much memory will really be consumed
when the container's applications start to use the allocated
memory.
Consumed memory is accounted into the <code>[[oomguarpages]]</code> parameter.

Since the memory accounted into <code>privvmpages</code> may not be actually used,
the sum of current <code>privvmpages</code> values for all containers
may exceed the RAM and swap size of the computer.
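The difference between allocated and consumed memory is easy to demonstrate. The Python sketch below is illustrative: it first creates a private anonymous mapping (allocation of the kind accounted into <code>privvmpages</code>), then writes into every page so that the memory is actually consumed; the mapping size is an arbitrary example. Run it inside a container and compare the <code>privvmpages</code> and <code>oomguarpages</code> rows of <code>/proc/user_beancounters</code> after each step:

<pre>
# Sketch: allocate memory without using it, then actually use it.
# The allocation alone is the kind of memory charged to privvmpages;
# only the touched pages become "consumed" memory (oomguarpages).
import mmap
import time

SIZE = 256 * 1024 * 1024          # 256 MB = 65536 pages of 4096 bytes

# Step 1: allocate a private anonymous mapping, but do not touch it.
buf = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
print("allocated, not used yet")
time.sleep(30)                    # inspect /proc/user_beancounters now

# Step 2: write one byte into every page, turning allocation into usage.
for offset in range(0, SIZE, 4096):
    buf[offset] = 1
print("allocated and used")
time.sleep(30)                    # inspect again
</pre>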
There should be a safety gap between the <code>barrier</code> and the <code>limit</code>
of the <code>privvmpages</code> parameter to reduce the number of memory allocation
failures that the application is unable to handle.
This gap will be used for “high-priority” memory allocations, such
as process stack expansion.
Normal-priority allocations will fail when the <code>barrier</code> of
<code>privvmpages</code> is reached.

Total <code>privvmpages</code> should correlate with the physical resources of the
computer.
Also, it is important not to allow any container to allocate a
significant portion of all system RAM, to avoid serious service level
degradation for other containers.
Both these configuration requirements are discussed in [[UBC systemwide configuration#Allocated memory]].

There's also an article describing how [[user pages accounting]] works.

== System-wide limits ==
All secondary parameters are related to memory.
Total limits on memory-related parameters must not exceed the physical
resources of the computer.
The restrictions on the configuration of memory-related parameters are listed
in [[UBC systemwide configuration]].
Those restrictions are very important, because their violation may
allow any container to cause the whole system to hang.

== Notes ==
<references/>