User Guide/Managing Resources

=== What are Disk Quotas? ===
Disk quotas enable system administrators to control the size of Linux file systems by limiting the amount of disk space and the number of inodes a Container can use. These quotas are known as per-CT quotas or first-level quotas in OpenVZ. In addition, OpenVZ enables the Container administrator to limit disk space and the number of inodes that individual users and groups in that CT can use. These quotas are called per-user and per-group quotas or second-level quotas in OpenVZ.
By default, OpenVZ has first-level quotas enabled (which is defined in the OpenVZ global configuration file), whereas second-level quotas must be turned on for each Container separately (in the corresponding CT configuration files). It is impossible to turn on second-level disk quotas for a Container if first-level disk quotas are off for that Container.
The session below illustrates a scenario when first-level quotas are on by default and are turned off for Container 101:
<pre>
''[checking that quota is on]''
# '''grep DISK_QUOTA /etc/vz/vz.conf'''
DISK_QUOTA=yes
''[checking available space on /vz partition]''
# '''df /vz'''
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/sda2              8957295   1421982   7023242  17% /vz
''[editing CT configuration file to add DISK_QUOTA=no]''
# '''vi /etc/vz/conf/101.conf'''
''[checking that quota is off for CT 101]''
# '''grep DISK_QUOTA /etc/vz/conf/101.conf'''
DISK_QUOTA=no
# '''vzctl start 101'''
Starting CT ...
CT is mounted
Adding IP address(es): 192.168.1.101
Hostname for CT set: vps101.my.org
CT start in progress...
# '''vzctl exec 101 df'''
Filesystem           1k-blocks      Used Available Use% Mounted on
simfs                  8282373    747060   7023242  10% /
</pre>
As the example above shows, the only limits on a Container with quotas turned off are the available disk space and inodes on the partition where the CT private area resides.
The following session sets the disk space available to Container 101 to approximately 1 GB and allows the CT to allocate up to 90,000 inodes. The grace period for the quotas is set to ten minutes:
<pre>
# '''vzctl set 101 --diskspace 1000000:1100000 --save'''
Saved parameters for CT 101
# '''vzctl set 101 --diskinodes 90000:91000 --save'''
Saved parameters for CT 101
# '''vzctl set 101 --quotatime 600 --save'''
Saved parameters for CT 101
# '''vzctl exec 101 df'''
Filesystem           1k-blocks      Used Available Use% Mounted on
simfs                  1000000    747066    252934  75% /
# '''vzctl exec 101 stat -f /'''
File: "/"
ID: 0 Namelen: 255 Type: ext2/ext3
</pre>
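In the <code>--diskspace</code> and <code>--diskinodes</code> pairs, the first number is the soft limit (barrier) and the second the hard limit; <code>df</code> inside the Container reports the soft limit (1,000,000 1K-blocks, roughly 0.95 GB). Usage may exceed the barrier, up to the hard limit, for no longer than the grace period set with <code>--quotatime</code> (600 seconds, i.e. the ten minutes mentioned above).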
=== Turning On and Off Second-Level Quotas for a Container ===
The parameter that controls the second-level disk quotas is <code>QUOTAUGIDLIMIT</code> in the Container configuration file. By default, the value of this parameter is zero; this corresponds to disabled per-user/group quotas.
If you assign a non-zero value to the <code>QUOTAUGIDLIMIT</code> parameter, this action brings about the two following results:

# Second-level (per-user and per-group) disk quotas are enabled for the given Container;
# The value that you assign to this parameter becomes the limit for the number of file owners and groups of this Container, including Linux system users. Note that you will theoretically be able to create extra users of this Container, but if the number of file owners inside the Container has already reached the limit, these users will not be able to own files.

Enabling per-user/group quotas for a Container requires restarting the Container. The value should be carefully chosen: the bigger the value you set, the bigger the kernel memory overhead this Container creates. The value must be greater than or equal to the number of entries in the Container <code>/etc/passwd</code> and <code>/etc/group</code> files. Taking into account that a newly created Red Hat Linux-based Container has about 80 entries in total, the typical value would be 100. However, for Containers with a large number of users this value may be increased.
The session below turns on second-level quotas for Container 101:
<pre>
# '''vzctl set 101 --quotaugidlimit 100 --save'''
Unable to apply new quota values: ugid quota not initialized
Saved parameters for CT 101
# '''vzctl stop 101; vzctl start 101'''
Stopping CT ...
CT was stopped
CT is unmounted
Starting CT ...
CT is mounted
Adding IP address(es): 192.168.1.101
Hostname for CT set: vps101.my.org
CT start in progress...
</pre>
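The "Unable to apply new quota values" message above is expected: as noted earlier, per-user/group quotas take effect only after the Container is restarted, which is why the session stops and starts CT 101.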
=== Setting Up Second-Level Disk Quota Parameters ===
In order to work with disk quotas inside a Container, you should have the standard quota tools installed:
<pre>
# '''vzctl exec 101 rpm -q quota'''
quota-3.12-5
</pre>
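Once the tools are in place, the standard commands from the quota package can be used inside the Container to set and report per-user limits. A hypothetical example (the user name and the limit values are illustrative):
<pre>
''[user1 and the limits are illustrative]''
# '''vzctl exec 101 setquota -u user1 5000 5500 80 85 /'''
# '''vzctl exec 101 repquota /'''
</pre>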
=== Checking Quota Status ===
As the Hardware Node system administrator, you can check the quota status for any Container with the <code>vzquota stat</code> and <code>vzquota show</code> commands. The first command reports the status from the kernel and should be used for running Containers. The second command reports the status from the quota file (located at <code>/var/vzquota/quota.''CTID''</code>) and should be used for stopped Containers. Both commands have the same output format.
The session below shows a partial output of CT 101 quota statistics:
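(The figures and user/group IDs in this transcript are illustrative; your output will differ.)
<pre>
''[values illustrative; output abridged]''
# '''vzquota stat 101 -t'''
   resource          usage       softlimit      hardlimit    grace
  1k-blocks         747066         1000000        1100000
     inodes          12142           90000          91000
User/group quota: on,active
Ugids: loaded 34, total 34, limit 100
Ugid limit was exceeded: no

User/group objects:
ID      type    resource        usage   softlimit  hardlimit  grace  status
0       user    1k-blocks       38216           0          0         loaded
0       user    inodes             45           0          0         loaded
...
</pre>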
The first three lines of the output show the status of first-level disk quotas for the Container. The rest of the output displays statistics for user/group quotas and has separate lines for each user and group ID existing in the system.
If you do not need the second-level quota statistics, you can omit the <code>-t</code> switch from the <code>vzquota</code> command line.

== Managing Container CPU resources ==

The current section explains the CPU resource parameters (CPU share) that you can configure and monitor for each Container. The table below provides the name and the description for the CPU parameters. The '''File''' column indicates whether the parameter is defined in the OpenVZ global configuration file (G) or in the CT configuration files (V).
{| class="wikitable"
! Parameter !! Description !! File
|-
| ve0cpuunits
| This is a positive integer number that determines the minimal guaranteed share of the CPU time Container 0 (the Hardware Node itself) will receive. It is recommended to set the value of this parameter to be 5-10% of the power of the Hardware Node. After the Node is up and running, you can redefine the amount of the CPU time allocated to the Node by using the <code>--cpuunits</code> parameter with the <code>vzctl set 0</code> command.
| G
|-
| cpuunits
| This is a positive integer number that determines the minimal guaranteed share of the CPU time the corresponding Container will receive.{{Note|In the current version of OpenVZ, you can also use this parameter to define the CPU time share for the Hardware Node.}}
| V
|-
| cpulimit
| This is a positive number indicating the CPU time, in per cent, that the corresponding CT is not allowed to exceed.
| V
|-
| cpus
| The number of CPUs to be used to handle the processes running inside the corresponding Container.
| V
|}
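In the configuration files themselves these parameters appear in upper case. A hypothetical excerpt (the values are illustrative) from a Container configuration file and from the global file might look as follows:
<pre>
''[/etc/vz/conf/101.conf: per-Container ("V") parameters; values illustrative]''
CPUUNITS="1500"
CPULIMIT="25"
CPUS="2"

''[/etc/vz/vz.conf: global ("G") parameter]''
VE0CPUUNITS="1000"
</pre>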
=== Managing CPU Share ===

The OpenVZ CPU resource control utilities allow you to guarantee any Container the amount of CPU time this Container receives. The Container can consume more than the guaranteed value if there are no other Containers competing for the CPU and the <code>cpulimit</code> parameter is not defined. {{Note|The CPU time shares and limits are calculated on the basis of a one-second period. Thus, for example, if a Container is not allowed to receive more than 50% of the CPU time, it will be able to receive no more than half a second each second.}}
To get a view of the optimal share to be assigned to a Container, check the current Hardware Node CPU utilization:
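For example, you can use the <code>vzcpucheck</code> utility and then promise Container 102 about 2% of the CPU while capping it at 4% (the utilization figures below are illustrative):
<pre>
''[values illustrative]''
# '''vzcpucheck'''
Current CPU utilization: 5166
Power of the node: 73072.5
# '''vzctl set 102 --cpuunits 1500 --cpulimit 4 --save'''
Saved parameters for CT 102
</pre>
With the Node power at roughly 73,000 units, 1500 CPU units amount to about 2% of the CPU time.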
Container 102 will receive from 2 to 4% of the Hardware Node CPU time unless the Hardware Node is overcommitted, i.e. the running Containers have been promised more CPU units than the power of the Hardware Node. In this case the CT might get less than 2 per cent.
 
{{Note|To set the <code>--cpuunits</code> parameter for the Hardware Node, you should indicate <code>0</code> as the Container ID (e.g. <code>vzctl set 0 --cpuunits 5000 --save</code>).}}
 
=== Configuring Number of CPUs Inside Container ===
 
If your Hardware Node has more than one physical processor installed, you can control the number of CPUs which will be used to handle the processes running inside separate Containers. By default, a Container is allowed to consume the CPU time of all processors on the Hardware Node, i.e. any process inside any Container can be executed on any processor on the Node. However, you can modify the number of physical CPUs which will be simultaneously available to a Container using the <code>--cpus</code> option of the <code>vzctl set</code> command. For example, if your Hardware Node has 4 physical processors installed, i.e. any Container on the Node can make use of these 4 processors, you can set the processes inside Container 101 to be run on 2 CPUs only by issuing the following command:
 
# '''vzctl set 101 --cpus 2 --save'''
 
{{Note|The number of CPUs to be set for a Container must not exceed the number of physical CPUs installed on the Hardware Node. In this case the 'physical CPUs' notation designates the number of CPUs the OpenVZ kernel is aware of (you can view this CPU number using the <code>cat /proc/cpuinfo</code> command on the Hardware Node).}}
 
You can check if the number of CPUs has been successfully changed by running the <code>cat /proc/cpuinfo</code> command inside your Container. Assuming that you have set two physical processors to handle the processes inside Container 101, your command output may look as follows:
 
<pre>
# '''vzctl exec 101 cat /proc/cpuinfo'''
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 15
model           : 4
model name      : Intel(R) Xeon(TM) CPU 2.80GHz
stepping        : 1
cpu MHz         : 2793.581
cache size      : 1024 KB
...
processor       : 1
vendor_id       : GenuineIntel
cpu family      : 15
model           : 4
model name      : Intel(R) Xeon(TM) CPU 2.80GHz
stepping        : 1
cpu MHz         : 2793.581
cache size      : 1024 KB
...
</pre>
 
The output shows that Container 101 is currently bound to only two of the Node's processors, instead of the 4 available to the other Containers. From this point on, the processes of Container 101 will be simultaneously executed on no more than 2 physical CPUs, while the other Containers on the Node will continue consuming the CPU time of all 4 Hardware Node processors, if needed. Note also that the particular physical CPUs serving Container 101 might not remain the same during the Container's operation; they may change for load-balancing reasons. The only thing that cannot change is their maximal number.
== Managing System Parameters ==
You can edit any of these parameters in the <code>/etc/vz/conf/''CTID''.conf</code> file of the corresponding Container by means of your favorite text editor (for example, vi or emacs), or by running the <code>vzctl set</code> command. For example:
# '''vzctl set 101 --kmemsize 2211840:2359296 --save'''
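Here <code>2211840:2359296</code> follows the usual barrier:limit convention; for <code>kmemsize</code> both values are in bytes (about 2.1 MB and 2.25 MB, respectively).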
Any sample configuration file may also be applied to a Container after it has been created. You would do this if, for example, you want to upgrade or downgrade the overall resources configuration of a particular CT:
# '''vzctl set 101 --applyconfig light --save'''
This command applies all the parameters from the <code>ve-light.conf-sample</code> file (i.e. <code>/etc/vz/conf/ve-light.conf-sample</code>) to the given CT, except for the <code>OSTEMPLATE</code>, <code>VE_ROOT</code>, and <code>VE_PRIVATE</code> parameters, should they exist in the sample configuration file.
