UBC primary parameters

The most important parameters determining the resources available to a container are explained below. The meaning of the parameters is illustrated assuming that the container runs some network server applications.

numproc

Maximum number of processes and kernel-level threads allowed for this container.

Many server applications (like the Apache Web server, FTP and mail servers) spawn a process to handle each client, so the limit on the number of processes defines how many clients the application will be able to handle in parallel. However, the number of processes doesn't limit how “heavy” the application is or whether the server will be able to serve heavy requests from clients.

When configuring the resource control system, it is important to estimate both the maximum number of processes and the average number of processes (referred to as avnumproc later). Other (dependent) resource control parameters may depend on both the limit and the average number (see UBC consistency check).

The barrier of numproc doesn't provide additional control and should be set equal to the limit.
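
For example, a minimal sketch of setting numproc with vzctl, assuming container ID 101 and a value of 800 processes (both the ID and the value are illustrative):

  # set the barrier and limit of numproc to the same value and
  # store it in the container's configuration file
  vzctl set 101 --numproc 800:800 --save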

There is a restriction on the total number of processes in the system. More than about 16000 processes start to cause poor responsiveness of the system, which worsens as the number grows. A total number of processes exceeding 32000 is very likely to hang the system.

Note that in practice the number of processes is usually lower. Each process consumes some memory, and the available memory and the "low memory" (see “Low memory”) limit the number of processes to smaller values. With typical processes, it is normal to be able to run only up to about 8000 processes in a system.
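
The current and peak numbers of processes of a container can be checked in /proc/user_beancounters (here from inside the container; the numeric columns are held, maxheld, barrier, limit and failcnt):

  grep numproc /proc/user_beancounters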

numtcpsock

Maximum number of TCP sockets.

This parameter limits the number of TCP connections and, thus, the number of clients the server application can handle in parallel.

The barrier of this parameter should be set equal to the limit.

If each container has its own set of IP addresses (which is the only way an OpenVZ system can be configured), there are no direct limits on the total number of TCP sockets in the system. The number of sockets needs to be controlled because each socket needs a certain amount of memory for receive and transmit buffers (see the descriptions of tcpsndbuf and tcprcvbuf), and memory is a limited resource.
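
As with numproc, the barrier would normally equal the limit; a sketch for container 101 with an illustrative value of 360 sockets:

  # barrier and limit of numtcpsock set equal, saved to the config file
  vzctl set 101 --numtcpsock 360:360 --save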

numothersock

Maximum number of non-TCP sockets (local sockets, UDP and other types of sockets).

Local (UNIX-domain) sockets are used for communications inside the system. Multi-tier applications (for example, a Web application with a database server as a back-end) may need one or more local sockets to serve each client. Straightforward applications (for example, most mail servers, with the exception of Postfix) do not use local sockets.

UDP sockets are used for Domain Name Service (DNS) queries, but the number of such sockets opened simultaneously is low. UDP and other sockets may also be used in some very special applications (SNMP agents and others).

The barrier of this parameter should be set equal to the limit. Neither the number of local sockets nor the number of UDP sockets in a system is directly limited in OpenVZ systems.

Similarly to the numtcpsock parameter discussed above, the number of non-TCP sockets needs to be controlled because each socket needs a certain amount of memory for its buffers, and memory is a limited resource.
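
Whether either socket limit is being hit can be checked via the failcnt column (the last one) of /proc/user_beancounters; non-zero values indicate failed socket creations:

  grep -E 'numtcpsock|numothersock' /proc/user_beancounters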

vmguarpages

Memory allocation guarantee.

This parameter controls how much memory is available to the container (i.e. how much memory its applications can allocate by malloc(3) or other standard Linux memory allocation mechanisms). The more clients are served or the “heavier” the application is, the more memory it needs.

The amount of memory that a container's applications are guaranteed to be able to allocate is specified as the barrier of the vmguarpages parameter. The current amount of allocated memory space is accounted into the privvmpages parameter; vmguarpages does not have its own accounting. The barrier and the limit of the privvmpages parameter impose an upper limit on memory allocations (see privvmpages). The meaning of the limit for the vmguarpages parameter is unspecified in the current version, and it should be set to the maximal allowed value (LONG_MAX).
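
In the container configuration file this could look as follows (a sketch assuming a guarantee of 65536 pages, i.e. 256 MB, and a 64-bit system where LONG_MAX is 9223372036854775807; the values are illustrative):

  # /etc/vz/conf/<CTID>.conf: the guarantee as the barrier,
  # LONG_MAX as the (unused) limit
  VMGUARPAGES="65536:9223372036854775807"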

If the current amount of allocated memory space does not exceed the guaranteed amount (the barrier of vmguarpages), memory allocations of the container's applications always succeed. If the current amount of allocated memory exceeds the guarantee but is below the barrier of privvmpages, allocations may or may not succeed, depending on the total amount of available memory in the system.

Starting from the barrier of privvmpages, normal-priority allocations fail, and starting from the limit of privvmpages, all memory allocations made by the applications fail.

The memory allocation guarantee (vmguarpages) is a primary tool for controlling the memory available to containers, because it allows administrators to provide Service Level Agreements: agreements guaranteeing a certain quality of service, a certain amount of resources, and general availability of the service.

The unit of measurement of vmguarpages values is memory pages (4 KB on x86 and x86_64 processors). The total memory allocation guarantees given to containers are limited by the physical resources of the computer (the size of RAM and the swap space), as discussed in UBC systemwide configuration.
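
Since the values are expressed in 4 KB pages, converting a desired guarantee in megabytes into a vmguarpages value is simple arithmetic, for example in a shell:

  # pages = megabytes * 1024 / 4; 256 MB corresponds to 65536 pages
  echo $((256 * 1024 / 4))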

There is a pseudo-graphical tool, vzmem, which allows you to distribute physical memory among all VEs consistently. It shows all physical memory blocks graphically in the /etc/vz/conf/MEM-MAP text file and lets you move these blocks from one VE to another to redistribute the memory. You can also specify "additional" memory for each VE individually: such memory is obtained from the system's free memory or swap (it is reflected as a modification of the privvmpages parameter).