<noinclude>{{UG/Header}}</noinclude>
The main goal of resource control in OpenVZ is to provide Service Level Management or Quality of Service (QoS) for Containers. Correctly configured resource control settings prevent serious impacts resulting from the resource over-usage (accidental or malicious) of any Container on the other Containers. Using resource control parameters for Quality of Service management also allows you to enforce fairness of resource usage among Containers and, if necessary, provide better service quality for preferred Containers.
{| class="wikitable"
! Group !! Description !! Parameters !! Explained in
|-
| Disk
| This group of parameters determines disk quota in OpenVZ. The OpenVZ disk quota is realized on two levels: the per-CT level and the per-user/group level. You can turn disk quota on or off on either level and configure its settings.
| DISK_QUOTA, DISKSPACE, DISKINODES, QUOTATIME, QUOTAUGIDLIMIT, IOPRIO
| [[#Managing Disk Quotas]]
|}

== Managing Disk Quotas ==
=== What are Disk Quotas? ===
Disk quotas enable system administrators to control the size of Linux file systems by limiting the amount of disk space and the number of inodes a Container can use. These quotas are known as per-CT quotas or first-level quotas in OpenVZ. In addition, OpenVZ enables the Container administrator to limit disk space and the number of inodes that individual users and groups in that CT can use. These quotas are called per-user and per-group quotas or second-level quotas in OpenVZ.
By default, OpenVZ has first-level quotas enabled (which is defined in the OpenVZ global configuration file), whereas second-level quotas must be turned on for each Container separately (in the corresponding CT configuration files). It is impossible to turn on second-level disk quotas for a Container if first-level disk quotas are off for that Container.
=== Disk Quota Parameters ===
The table below summarizes the disk quota parameters that you can control. The '''File''' column indicates whether the parameter is defined in the OpenVZ global configuration file (G), in the CT configuration files (V), or defined in the global configuration file but can be overridden in a separate CT configuration file (GV).
{| class="wikitable"
! Parameter !! Description !! File
|-
| DISK_QUOTA
| Indicates whether first-level disk quotas are on or off for all Containers or for a particular Container.
| GV
|-
| DISKSPACE
| Total size of disk space the Container may consume, in 1-Kb blocks.
| V
|-
| DISKINODES
| Total number of disk inodes (files, directories, and symbolic links) the Container can allocate.
| V
|-
| QUOTATIME
| The grace period for the disk quota, in seconds. The Container is allowed to temporarily exceed its soft limits for no more than this period.
| V
|-
| QUOTAUGIDLIMIT
| Number of user/group IDs allowed for the Container internal (second-level) disk quota. If set to 0, per-user/group quotas are disabled.
| V
|-
| IOPRIO
| The Container disk I/O priority level, an integer in the range from 0 to 7.
| V
|}
The session below illustrates a scenario when first-level quotas are on by default and are turned off for Container 101:
<pre>
''[checking that quota is on]''
# '''grep DISK_QUOTA /etc/vz/vz.conf'''
DISK_QUOTA=yes
''[checking available space on /vz partition]''
# '''df /vz'''
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/sda2              8957295   1421982   7023242  17% /vz
''[editing CT configuration file to add DISK_QUOTA=no]''
# '''vi /etc/vz/conf/101.conf'''
''[checking that quota is off for CT 101]''
# '''grep DISK_QUOTA /etc/vz/conf/101.conf'''
DISK_QUOTA=no
# '''vzctl start 101'''
Starting CT ...
CT is mounted
Adding IP address(es): 192.168.1.101
Hostname for CT set: vps101.my.org
CT start in progress...
# '''vzctl exec 101 df'''
Filesystem           1k-blocks      Used Available Use% Mounted on
simfs                  8282373    747060   7023242  10% /
</pre>
As the above example shows, the only disk space limit a Container with the quotas turned off has is the available space and inodes on the partition where the CT private area resides.
=== Setting Up Per-CT Disk Quota Parameters ===
Three parameters determine how much disk space and inodes a Container can use. These parameters are specified in the Container configuration file:
; DISKSPACE
: Total size of disk space that can be consumed by the Container, in 1-Kb blocks. When the space used by the Container hits the soft limit, the CT can allocate additional disk space up to the hard limit during the grace period specified by the QUOTATIME parameter.
; DISKINODES
: Total number of disk inodes (files, directories, and symbolic links) the Container can allocate. When the number of inodes used by the Container hits the soft limit, the CT can create additional file entries up to the hard limit during the grace period specified by the QUOTATIME parameter.
; QUOTATIME
: The grace period of the disk quota, specified in seconds. The Container is allowed to temporarily exceed the soft limit values for the disk space and disk inodes quotas for no more than the period specified by this parameter.
The first two parameters have both soft and hard limits (or, simply, barriers and limits). The hard limit is the limit that cannot be exceeded under any circumstances. The soft limit can be exceeded up to the hard limit, but as soon as the grace period expires, the additional disk space or inodes allocations will fail. Barriers and limits are separated by colons (“:”) in Container configuration files and in the command line.
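For reference, the barrier and limit values are stored with the same colon syntax in the Container configuration file. Here is a sketch of the relevant lines as they might appear after running the session below (the exact quoting is an assumption about how <code>vzctl --save</code> writes parameters):
<pre>
DISKSPACE="1000000:1100000"
DISKINODES="90000:91000"
QUOTATIME="600"
</pre>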
The following session sets the disk space available to Container 101 to approximately 1 GB and allows the CT to allocate up to 90,000 inodes. The grace period for the quotas is set to ten minutes:
# '''vzctl set 101 --diskspace 1000000:1100000 --save'''
Saved parameters for CT 101
# '''vzctl set 101 --diskinodes 90000:91000 --save'''
Saved parameters for CT 101
# '''vzctl set 101 --quotatime 600 --save'''
Saved parameters for CT 101
# '''vzctl exec 101 df'''
Filesystem 1k-blocks Used Available Use% Mounted on
simfs 1000000 747066 252934 75% /
# '''vzctl exec 101 stat -f /'''
File: "/"
ID: 0 Namelen: 255 Type: ext2/ext3
=== Turning On and Off Second-Level Quotas for Container ===
The parameter that controls the second-level disk quotas is <code>QUOTAUGIDLIMIT</code> in the Container configuration file. By default, the value of this parameter is zero, which corresponds to disabled per-user/group quotas.
If you assign a non-zero value to the <code>QUOTAUGIDLIMIT</code> parameter, this has the following two results:
# Second-level (per-user and per-group) disk quotas are enabled for the given Container;
# The value that you assign to this parameter will be the limit for the number of file owners and groups of this Container, including Linux system users. Note that you will theoretically be able to create extra users of this Container, but if the number of file owners inside the Container has already reached the limit, these users will not be able to own files.
Enabling per-user/group quotas for a Container requires restarting the Container. The value should be carefully chosen: the bigger the value you set, the bigger the kernel memory overhead this Container creates. This value must be greater than or equal to the number of entries in the Container <code>/etc/passwd</code> and <code>/etc/group</code> files. Taking into account that a newly created Red Hat Linux-based Container has about 80 entries in total, the typical value would be 100. However, for Containers with a large number of users this value may be increased.
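To estimate a lower bound for this value, you can count these entries inside the Container. A hypothetical session (the count will differ on your system):

# '''vzctl exec 101 "cat /etc/passwd /etc/group | wc -l"'''
82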
The session below turns on second-level quotas for Container 101:
# '''vzctl set 101 --quotaugidlimit 100 --save'''
Unable to apply new quota values: ugid quota not initialized
Saved parameters for CT 101
# '''vzctl restart 101'''
Restarting container
Stopping container ...
Container was stopped
Container is unmounted
Starting container ...
Container is mounted
Adding IP address(es): 192.168.1.101
Setting CPU units: 1000
Configure meminfo: 65536
File resolv.conf was modified
Container start in progress...
=== Setting Up Second-Level Disk Quota Parameters ===
In order to work with disk quotas inside a Container, you should have standard quota tools installed:

# '''vzctl exec 101 rpm -q quota'''
quota-3.12-5
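Once the tools are installed and second-level quotas are enabled, the standard Linux quota commands work inside the Container. For example, you could list the per-user usage report (a sketch; the output is omitted as it depends on the Container):

# '''vzctl exec 101 repquota -a'''
...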
=== Checking Quota Status ===
As the Hardware Node system administrator, you can check the quota status for any Container with the <code>vzquota stat</code> and <code>vzquota show</code> commands. The first command reports the status from the kernel and should be used for running Containers. The second command reports the status from the quota file (located at <code>/var/vzquota/quota.''CTID''</code>) and should be used for stopped Containers. Both commands have the same output format.
The session below shows a partial output of CT 101 quota statistics:
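A sketch of what this output may look like for a running Container (the figures and exact layout are illustrative assumptions, not actual output from this system):

# '''vzquota stat 101 -t'''
   resource          usage       softlimit      hardlimit    grace
  1k-blocks         747066         1000000        1100000
     inodes          14080           90000          91000
User/group quota: on,active
Ugids: loaded 34, total 34, limit 100
...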
The first three lines of the output show the status of first-level disk quotas for the Container. The rest of the output displays statistics for user/group quotas and has separate lines for each user and group ID existing in the system.
If you do not need the second-level quota statistics, you can omit the <code>-t</code> switch from the <code>vzquota</code> command line.

=== Configuring Container Disk I/O Priority Level ===

OpenVZ provides you with the capability of configuring the Container disk I/O (input/output) priority level. The higher the Container I/O priority level, the more time the Container will get for its disk I/O activities as compared to the other Containers on the Hardware Node. By default, any Container on the Hardware Node has the I/O priority level set to 4. However, you can change the current Container I/O priority level in the range from 0 to 7 using the <code>--ioprio</code> option of the <code>vzctl set</code> command. For example, you can issue the following command to set the I/O priority of Container 101 to 6:

# '''vzctl set 101 --ioprio 6 --save'''
Saved parameters for Container 101

To check the I/O priority level currently applied to Container 101, you can execute the following command:

# '''grep IOPRIO /etc/vz/conf/101.conf'''
IOPRIO="6"

The command output shows that the current I/O priority level is set to 6.

== Managing Container CPU Resources ==

The current section explains the CPU resource parameters (CPU share) that you can configure and monitor for each Container. The table below provides the name and the description for the CPU parameters. The '''File''' column indicates whether the parameter is defined in the OpenVZ global configuration file (G) or in the CT configuration files (V).
{| class="wikitable"
! Parameter !! Description !! File
|-
| ve0cpuunits
| This is a positive integer number that determines the minimal guaranteed share of the CPU time Container 0 (the Hardware Node itself) will receive. It is recommended to set the value of this parameter to be 5-10% of the power of the Hardware Node. After the Node is up and running, you can redefine the amount of the CPU time allocated to the Node by using the <code>--cpuunits</code> parameter with the <code>vzctl set 0</code> command.
| G
|-
| cpuunits
| This is a positive integer number that determines the minimal guaranteed share of the CPU time the corresponding Container will receive.{{Note|In the current version of OpenVZ, you can also use this parameter to define the CPU time share for the Hardware Node.}}
| V
|-
| cpulimit
| This is a positive number indicating the CPU time, in percent, that the corresponding CT is not allowed to exceed.
| V
|-
| cpus
| The number of CPUs to be used to handle the processes running inside the corresponding Container.
| V
|}
=== Managing CPU Share ===

The OpenVZ CPU resource control utilities allow you to guarantee any Container the amount of CPU time this Container receives. The Container can consume more than the guaranteed value if there are no other Containers competing for the CPU and the <code>cpulimit</code> parameter is not defined.

{{Note|The CPU time shares and limits are calculated on the basis of a one-second period. Thus, for example, if a Container is not allowed to receive more than 50% of the CPU time, it will be able to receive no more than half a second each second.}}
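For example, to apply the 50% limit mentioned in the note above to Container 101, you could run the following hypothetical session:

# '''vzctl set 101 --cpulimit 50 --save'''
Saved parameters for CT 101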
To get a view of the optimal share to be assigned to a Container, check the current Hardware Node CPU utilization:
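A sketch of such a check with the <code>vzcpucheck</code> utility (the figures are illustrative):

# '''vzcpucheck'''
Current CPU utilization: 5166
Power of the node: 73072.5

Assuming a Node power of roughly 73000 units, the following hypothetical command promises Container 102 about 2% of the CPU time (1500 units) and caps it at 4%, which is where the estimate below comes from:

# '''vzctl set 102 --cpuunits 1500 --cpulimit 4 --save'''
Saved parameters for CT 102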
Container 102 will receive from 2 to 4% of the Hardware Node CPU time unless the Hardware Node is overcommitted, i.e. the running Containers have been promised more CPU units than the power of the Hardware Node. In this case the CT might get less than 2 per cent.
 
{{Note|To set the <code>--cpuunits</code> parameter for the Hardware Node, you should indicate <code>0</code> as the Container ID (e.g. <code>vzctl set 0 --cpuunits 5000 --save</code>).}}
 
=== Configuring Number of CPUs Inside Container ===
 
If your Hardware Node has more than one physical processor installed, you can control the number of CPUs which will be used to handle the processes running inside separate Containers. By default, a Container is allowed to consume the CPU time of all processors on the Hardware Node, i.e. any process inside any Container can be executed on any processor on the Node. However, you can modify the number of physical CPUs which will be simultaneously available to a Container using the <code>--cpus</code> option of the <code>vzctl set</code> command. For example, if your Hardware Node has 4 physical processors installed, i.e. any Container on the Node can make use of these 4 processors, you can set the processes inside Container 101 to be run on 2 CPUs only by issuing the following command:
 
# '''vzctl set 101 --cpus 2 --save'''
 
{{Note|The number of CPUs to be set for a Container must not exceed the number of physical CPUs installed on the Hardware Node. In this case the 'physical CPUs' notation designates the number of CPUs the OpenVZ kernel is aware of (you can view this CPU number using the <code>cat /proc/cpuinfo</code> command on the Hardware Node).}}
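For instance, you can count the processors the kernel is aware of with a quick one-liner on the Node (a sketch):

# '''grep -c "^processor" /proc/cpuinfo'''
4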
 
You can check if the number of CPUs has been successfully changed by running the <code>cat /proc/cpuinfo</code> command inside your Container. Assuming that you have set two physical processors to handle the processes inside Container 101, your command output may look as follows:
 
# '''vzctl exec 101 cat /proc/cpuinfo'''
processor : 0
vendor_id : GenuineIntel
cpu family : 15
model : 4
model name : Intel(R) Xeon(TM) CPU 2.80GHz
stepping : 1
cpu MHz : 2793.581
cache size : 1024 KB
...
processor : 1
vendor_id : GenuineIntel
cpu family : 15
model : 4
model name : Intel(R) Xeon(TM) CPU 2.80GHz
stepping : 1
cpu MHz : 2793.581
cache size : 1024 KB
...
 
The output shows that Container 101 is currently bound to only two processors on the Hardware Node instead of the 4 available to the other Containers on this Node. It means that, from this point on, the processes of Container 101 will be simultaneously executed on no more than 2 physical CPUs, while the other Containers on the Node will continue consuming the CPU time of all 4 Hardware Node processors, if needed. Please note also that the particular physical CPUs serving Container 101 might not remain the same during the Container operation; they might change for load balancing reasons. The only thing that cannot be changed is their maximal number.
== Managing System Parameters ==
You can edit any of these parameters in the <code>/etc/vz/conf/''CTID''.conf</code> file of the corresponding Container by means of your favorite text editor (for example, vi or emacs), or by running the <code>vzctl set</code> command. For example:
# '''vzctl set 101 --kmemsize 2211840:2359296 --save'''
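To verify that the new values have been saved, you can check the Container configuration file in the same way as with <code>IOPRIO</code> above (the exact quoting in the file is an assumption):

# '''grep KMEMSIZE /etc/vz/conf/101.conf'''
KMEMSIZE="2211840:2359296"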
Any CT is configured by means of its own configuration file. You can manage your CT configurations in a number of ways:
<ol>
<li>Using configuration sample files shipped with OpenVZ. These files are used when a new Container is being created (for details, see the [[#Creating and Configuring New Container]] section). They are stored in <code>/etc/vz/conf/</code> and have the <code>ve-''name''.conf-sample</code> mask. Currently, the following configuration sample files are provided:
* light – to be used for creating “light” CTs having restrictions on the upper limit of quality of service parameters;
* vps.basic – to be used for common CTs.
{{Note|Configuration sample files cannot contain spaces in their names.}}
Any sample configuration file may also be applied to a Container after it has been created. You would do this if, for example, you want to upgrade or downgrade the overall resources configuration of a particular CT:
# '''vzctl set 101 --applyconfig light --save'''
This command applies all the parameters from the <code>ve-light.conf-sample</code> file to the given CT, except for the OSTEMPLATE, VE_ROOT, and VE_PRIVATE parameters, should they exist in the sample configuration file.</li>
</ol>
It is possible to create a Container configuration roughly representing a given fraction of the Hardware Node. For example, if you want a configuration such that up to 20 fully loaded Containers can run simultaneously on the given Hardware Node, you can create it as illustrated below:
# '''cd /etc/vz/conf'''
# '''vzsplit -n 20 -f vps.mytest'''
Config /etc/vz/conf/ve-vps.mytest.conf-sample was created
# '''vzcfgvalidate /etc/vz/conf/ve-vps.mytest.conf-sample'''
Recommendation: kmemsize.lim-kmemsize.bar should be > 253952 (currently, 126391)
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 93622)
The number of Containers you can run on the Hardware Node is actually several times greater than the value specified in the command line because Containers normally do not consume all the resources that are guaranteed to them. To illustrate this idea, let us look at the Container created from the configuration produced above:
# '''vzctl create 101 --ostemplate centos-5 --config vps.mytest'''
Creating CT private area: /vz/private/101
CT private area was created
Saved parameters for CT 101
# '''vzctl start 101'''
Starting CT ...
CT is mounted
Adding IP address(es): 192.168.1.101
CT start in progress...
# '''vzcalc 101'''
Resource Current(%) Promised(%) Max(%)
=== Validating Container Configuration ===
The system resource control parameters have complex interdependencies. Violation of these interdependencies can be catastrophic for the Container. In order to ensure that a Container does not break them, it is important to validate the CT configuration file before creating CTs on its basis.
Here is how to validate a Container configuration:
# '''vzcfgvalidate /etc/vz/conf/101.conf'''
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 65536)
Recommendation: othersockbuf.bar should be > 132096 (currently, 122880)
# '''vzctl set 101 --kmemsize 2211840:2359296 --save'''
Saved parameters for CT 101
# '''vzcfgvalidate /etc/vz/conf/101.conf'''
Recommendation: kmemsize.lim-kmemsize.bar should be > 163840 (currently, 147456)
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 65536)
Recommendation: othersockbuf.bar should be > 132096 (currently, 122880)
Validation completed: success
The utility checks constraints on the resource management parameters and displays all the constraint violations found. There can be three levels of violation severity: recommendations, warnings, and errors.
==== Manual adjustment ====

To fix errors or warnings reported by <code>vzcfgvalidate</code>, adjust the parameters accordingly and re-run <code>vzcfgvalidate</code>:

# '''vzctl set 101 --kmemsize 2211840:2359296 --save'''
Saved parameters for CT 101
# '''vzcfgvalidate /etc/vz/conf/101.conf'''
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 65536)
Recommendation: othersockbuf.bar should be > 132096 (currently, 122880)
Validation completed: success

In the scenario above, the first run of the <code>vzcfgvalidate</code> utility found a critical error for the <code>kmemsize</code> parameter value. After setting reasonable values for <code>kmemsize</code>, the resulting configuration produced only recommendations, and the Container can be safely run with this configuration.

==== Automatic adjustment ====

FIXME: vzcfgvalidate -r|-i

=== Applying New Configuration Sample to Container ===

The OpenVZ software enables you to change the configuration sample file a Container is based on and, thus, to modify all the resources the Container may consume and/or allocate at once. For example, if Container 101 is currently based on the <code>light</code> configuration sample and you are planning to run some more heavy-weight application inside the Container, you may wish to apply the <code>basic</code> sample to it instead of <code>light</code>, which will automatically adjust the necessary Container resource parameters. To this effect, you can execute the following command on the Node:

# '''vzctl set 101 --applyconfig basic --save'''
Saved parameters for CT 101

This command reads the resource parameters from the <code>ve-basic.conf-sample</code> file located in the <code>/etc/vz/conf</code> directory and applies them one by one to Container 101.

When applying new configuration samples to Containers, please keep in mind the following:

* All Container sample files are located in the <code>/etc/vz/conf</code> directory on the Hardware Node and are named according to the following pattern: <code>ve-''name''.conf-sample</code>. You should specify only the <code>''name''</code> part of the corresponding sample name after the <code>--applyconfig</code> option (<code>basic</code> in the example above).
* The <code>--applyconfig</code> option applies all the parameters from the specified sample file to the given Container, except for the <code>OSTEMPLATE</code>, <code>VE_ROOT</code>, <code>VE_PRIVATE</code>, <code>HOSTNAME</code>, <code>IP_ADDRESS</code>, <code>TEMPLATE</code>, and <code>NETIF</code> parameters (if they exist in the sample file).
* You may need to restart your Container, depending on whether the changes for the selected parameters can be applied on the fly. If some parameters could not be configured on the fly, you will see a message informing you of this fact.

<noinclude>{{UG/Footer}}</noinclude>