User Guide/Managing Resources
The main goal of resource control in OpenVZ is to provide Service Level Management or Quality of Service (QoS) for Containers. Correctly configured resource control settings prevent serious impacts resulting from the resource over-usage (accidental or malicious) of any Container on the other Containers. Using resource control parameters for Quality of Service management also allows you to enforce fairness of resource usage among Containers and better service quality for preferred CTs, if necessary.
| Disk
| This group of parameters determines disk quota in OpenVZ. The OpenVZ disk quota is realized on two levels: the per-CT level and the per-user/group level. You can turn on/off disk quota on any level and configure its settings.
| DISK_QUOTA, DISKSPACE, DISKINODES, QUOTATIME, QUOTAUGIDLIMIT
| [[#Managing Disk Quotas]]
|-
=== What are Disk Quotas? ===

Disk quotas enable system administrators to control the size of Linux file systems by limiting the amount of disk space and the number of inodes a Container can use. These quotas are known as per-CT quotas or first-level quotas in OpenVZ. In addition, OpenVZ enables the Container administrator to limit disk space and the number of inodes that individual users and groups in that CT can use. These quotas are called per-user and per-group quotas or second-level quotas in OpenVZ.

By default, OpenVZ has first-level quotas enabled (which is defined in the OpenVZ global configuration file), whereas second-level quotas must be turned on for each Container separately (in the corresponding CT configuration files). It is impossible to turn on second-level disk quotas for a Container if first-level disk quotas are off for that Container.
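This dependency can be expressed as a simple consistency check. The variables below merely stand in for the DISK_QUOTA and QUOTAUGIDLIMIT settings of one CT and are hypothetical, not read from a real configuration file:

```shell
# Consistency check sketch: second-level quotas (QUOTAUGIDLIMIT > 0)
# require first-level quotas (DISK_QUOTA=yes) for the same CT.
DISK_QUOTA=no        # hypothetical per-CT setting
QUOTAUGIDLIMIT=100   # hypothetical per-CT setting

valid=yes
if [ "$QUOTAUGIDLIMIT" -gt 0 ] && [ "$DISK_QUOTA" != "yes" ]; then
    valid=no
    echo "invalid: second-level quotas need DISK_QUOTA=yes"
fi
```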
=== Disk Quota Parameters ===

The table below summarizes the disk quota parameters that you can control. The File column indicates whether the parameter is defined in the OpenVZ global configuration file (G), in the CT configuration files (V), or it is defined in the global configuration file but can be overridden in a separate CT configuration file (GV).

{| class="wikitable"
! Parameter !! Description !! File
The session below illustrates a scenario when first-level quotas are on by default and are turned off for Container 101:

<pre>
''[checking that quota is on]''
# grep DISK_QUOTA /etc/vz/vz.conf
DISK_QUOTA=yes

''[checking available space on /vz partition]''
# df /vz
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/sda2              8957295   1421982   7023242  17% /vz

''[editing CT configuration file to add DISK_QUOTA=no]''
# vi /etc/vz/conf/101.conf

''[checking that quota is off for CT 101]''
# grep DISK_QUOTA /etc/vz/conf/101.conf
DISK_QUOTA=no

# vzctl start 101
Starting CT …
CT is mounted
Adding IP address(es): 192.168.1.101
Hostname for CT set: vps101.my.org
CT start in progress…
# vzctl exec 101 df
Filesystem           1k-blocks      Used Available Use% Mounted on
simfs                  8282373    747060   7023242  10% /
</pre>
− | |||
− | |||
− | |||
As the above example shows, the only disk space limit a Container with the quotas turned off has is the available space and inodes on the partition where the CT private area resides.
=== Setting Up Per-CT Disk Quota Parameters ===

Three parameters determine how much disk space and inodes a Container can use. These parameters are specified in the Container configuration file:

{| class="wikitable"
| DISKSPACE
| Total size of disk space that can be consumed by the Container in 1-Kb blocks. When the space used by the Container hits the soft limit, the CT can allocate additional disk space up to the hard limit during the grace period specified by the QUOTATIME parameter.
|-
| DISKINODES
| Total number of disk inodes (files, directories, and symbolic links) the Container can allocate. When the number of inodes used by the Container hits the soft limit, the CT can create additional file entries up to the hard limit during the grace period specified by the QUOTATIME parameter.
|-
| QUOTATIME
| The grace period of the disk quota specified in seconds. The Container is allowed to temporarily exceed the soft limit values for the disk space and disk inodes quotas for no more than the period specified by this parameter.
|}
The first two parameters have both soft and hard limits (or, simply, barriers and limits). The hard limit is the limit that cannot be exceeded under any circumstances. The soft limit can be exceeded up to the hard limit, but as soon as the grace period expires, the additional disk space or inodes allocations will fail. Barriers and limits are separated by colons (“:”) in Container configuration files and in the command line.
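The barrier:limit notation can be split with ordinary shell parameter expansion. The sketch below uses a hypothetical DISKSPACE value rather than reading a real CT configuration file:

```shell
# Split a "barrier:limit" (soft:hard) pair as used by DISKSPACE/DISKINODES.
# The value below is a hypothetical example, not taken from a real config.
DISKSPACE="1000000:1100000"

barrier=${DISKSPACE%%:*}   # soft limit, in 1-Kb blocks
limit=${DISKSPACE##*:}     # hard limit, in 1-Kb blocks

echo "barrier=$barrier limit=$limit"
# The soft limit must never exceed the hard limit
[ "$barrier" -le "$limit" ] && echo "barrier/limit pair is consistent"
```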
The following session sets the disk space available to Container 101 to approximately 1Gb and allows the CT to allocate up to 90,000 inodes. The grace period for the quotas is set to ten minutes:

<pre>
# vzctl set 101 --diskspace 1000000:1100000 --save
Saved parameters for CT 101
# vzctl set 101 --diskinodes 90000:91000 --save
Saved parameters for CT 101
# vzctl set 101 --quotatime 600 --save
Saved parameters for CT 101
# vzctl exec 101 df
Filesystem           1k-blocks      Used Available Use% Mounted on
simfs                  1000000    747066    252934  75% /
# vzctl exec 101 stat -f /
  File: "/"
    ID: 0        Namelen: 255     Type: ext2/ext3
Blocks: Total: 1000000    Free: 252934    Available: 252934    Size: 1024
Inodes: Total: 90000      Free: 9594
</pre>
It is possible to change the first-level disk quota parameters for a running Container. The changes will take effect immediately. If you do not want your changes to persist till the next Container startup, do not use the <code>--save</code> switch.
=== Turning On and Off Second-Level Quotas for Container ===

The parameter that controls the second-level disk quotas is <code>QUOTAUGIDLIMIT</code> in the CT configuration file. By default, the value of this parameter is zero and this corresponds to disabled per-user/group quotas.
If you assign a non-zero value to the <code>QUOTAUGIDLIMIT</code> parameter, this action brings about the two following results:

# Second-level (per-user and per-group) disk quotas are enabled for the given Container;
# The value that you assign to this parameter will be the limit for the number of file owners and groups of this CT, including Linux system users. Note that you will theoretically be able to create extra users inside this CT, but if the number of file owners inside the CT has already reached the limit, these users will not be able to own files.
Enabling per-user/group quotas for a Container requires restarting the CT. The value for it should be carefully chosen: the bigger the value you set, the bigger the kernel memory overhead this Container creates. This value must be greater than or equal to the number of entries in the CT /etc/passwd and /etc/group files. Taking into account that a newly created Red Hat Linux-based CT has about 80 entries in total, the typical value would be 100. However, for Containers with a large number of users this value may be increased.
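Since the limit is tied to the number of entries in the CT's /etc/passwd and /etc/group files, a rough sizing helper can be sketched as below. The sample files and the headroom rounding are assumptions for illustration, not part of OpenVZ:

```shell
# Estimate a QUOTAUGIDLIMIT value from passwd/group entry counts.
# Works on throwaway sample files; point it at a CT's real files in practice.
tmp=$(mktemp -d)
printf 'root:x:0:0::/root:/bin/bash\nnobody:x:99:99::/:/sbin/nologin\n' > "$tmp/passwd"
printf 'root:x:0:\nwheel:x:10:\n' > "$tmp/group"

entries=$(( $(wc -l < "$tmp/passwd") + $(wc -l < "$tmp/group") ))
# Add ~25% headroom and round up to a multiple of 50 (arbitrary heuristic;
# for the ~80 entries of a fresh Red Hat CT this yields the guide's 100).
suggested=$(( (entries * 5 / 4 + 49) / 50 * 50 ))
echo "entries=$entries suggested QUOTAUGIDLIMIT=$suggested"
```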
The session below turns on second-level quotas for Container 101:

<pre>
# vzctl set 101 --quotaugidlimit 100 --save
Unable to apply new quota values: ugid quota not initialized
Saved parameters for CT 101
# vzctl stop 101; vzctl start 101
Stopping CT …
CT was stopped
CT is unmounted
Starting CT …
CT is mounted
Adding IP address(es): 192.168.1.101
Hostname for CT set: vps101.my.org
CT start in progress…
</pre>
− | |||
− | |||
− | |||
=== Setting Up Second-Level Disk Quota Parameters ===

In order to work with disk quotas inside a CT, you should have standard quota tools installed:

<pre>
# vzctl exec 101 rpm -q quota
quota-3.12-5
</pre>
=== Checking Quota Status ===

As the Hardware Node system administrator, you can check the quota status for any Container with the vzquota stat and vzquota show commands. The first command reports the status from the kernel and should be used for running Containers. The second command reports the status from the quota file (located at /var/vzquota/quota.vpsid) and should be used for stopped Containers. Both commands have the same output format.

The session below shows a partial output of CT 101 quota statistics:
The first three lines of the output show the status of first-level disk quotas for the Container. The rest of the output displays statistics for user/group quotas and has separate lines for each user and group ID existing in the system.

If you do not need the second-level quota statistics, you can omit the <code>-t</code> switch from the vzquota command line.
== Managing CPU Share ==

The current section explains the CPU resource parameters (CPU share) that you can configure and monitor for each Container.

The table below provides the name and the description for the CPU parameters. The File column indicates whether the parameter is defined in the OpenVZ global configuration file (G) or in the CT configuration files (V).

{| class="wikitable"
! Parameter
|-
| ve0cpuunits
| This is a positive integer number that determines the minimal guaranteed share of the CPU time Container 0 (the Hardware Node itself) will receive. It is recommended to set the value of this parameter to be 5-10% of the power of the Hardware Node.
| G
|-
| cpuunits
| This is a positive integer number that determines the minimal guaranteed share of the CPU time the corresponding Container will receive.
| V
|-
| cpulimit
| This is a positive number indicating the CPU time in per cent the corresponding CT is not allowed to exceed.
| V
|}
The OpenVZ CPU resource control utilities allow you to guarantee any Container the amount of CPU time this Container receives. The Container can consume more than the guaranteed value if there are no other Containers competing for the CPU and the cpulimit parameter is not defined.

To get a view of the optimal share to be assigned to a Container, check the current Hardware Node CPU utilization:
Container 102 will receive from 2 to 4% of the Hardware Node CPU time unless the Hardware Node is overcommitted, i.e. the running Containers have been promised more CPU units than the power of the Hardware Node. In this case the CT might get less than 2 per cent.
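The guarantee arithmetic behind such figures can be sketched in shell. The cpuunits numbers below are hypothetical; a real calculation would sum ve0cpuunits and the cpuunits of all running CTs:

```shell
# A CT's guaranteed CPU share is its cpuunits divided by the sum of
# cpuunits over CT0 and all running CTs. Hypothetical numbers:
ve0=440000      # ve0cpuunits of the Hardware Node
ct101=10000     # cpuunits of CT 101
ct102=20000     # cpuunits of CT 102
total=$((ve0 + ct101 + ct102))

# Guaranteed share of CT 102, in hundredths of a per cent
share=$((ct102 * 10000 / total))
echo "CT 102 guaranteed share: $((share / 100)).$((share % 100))%"
```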
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
== Managing System Parameters ==

|}

You can edit any of these parameters in the <code>/etc/vz/conf/''CTID''.conf</code> file of the corresponding CT by means of your favorite text editor (for example, vi or emacs), or by running the vzctl set command. For example:

# '''vzctl set 101 --kmemsize 2211840:2359296 --save'''
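For illustration, the same edit can be made non-interactively with sed. The sketch below works on a temporary copy of a config line; on a real node, vzctl set with --save is preferable because it also applies the change to the running CT:

```shell
# Sketch: rewriting a UBC parameter in a copy of a CT config file.
# Uses a temp file; "vzctl set ... --save" is the normal way on a real node.
conf=$(mktemp)
echo 'KMEMSIZE="2211840:2359296"' > "$conf"

# Raise the kmemsize barrier:limit pair (hypothetical new values)
sed -i 's/^KMEMSIZE=.*/KMEMSIZE="2752512:2936012"/' "$conf"
grep KMEMSIZE "$conf"
```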
Any CT is configured by means of its own configuration file. You can manage your CT configurations in a number of ways:
<ol>
<li>Using configuration sample files shipped with OpenVZ. These files are used when a new Container is being created (for details, see the [[#Creating and Configuring New Container]] section). They are stored in <code>/etc/vz/</code> and have the <code>ve‑''name''.conf-sample</code> mask. Currently, the following configuration sample files are provided:
* light – to be used for creating “light” CTs having restrictions on the upper limit of quality of service parameters;
* vps.basic – to be used for common CTs.

{{Note|Configuration sample files cannot contain spaces in their names.}}
Any sample configuration file may also be applied to a Container after it has been created. You would do this if, for example, you want to upgrade or downgrade the overall resources configuration of a particular CT:

# vzctl set 101 --applyconfig light --save

This command applies all the parameters from the ve‑light.conf‑sample file to the given CT, except for the OSTEMPLATE, VE_ROOT, and VE_PRIVATE parameters, should they exist in the sample configuration file.</li>
It is possible to create a Container configuration roughly representing a given fraction of the Hardware Node. If you want to create such a configuration that up to 20 fully loaded Containers would be able to be simultaneously running on the given Hardware Node, you can do it as is illustrated below:

# '''cd /etc/vz/'''
# '''vzsplit -n 20 -f vps.mytest'''
Config /etc/vz/ve-vps.mytest.conf-sample was created
# '''vzcfgvalidate ve-vps.mytest.conf-sample'''
Recommendation: kmemsize.lim-kmemsize.bar should be > 253952 (currently, 126391)
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 93622)
The number of Containers you can run on the Hardware Node is actually several times greater than the value specified in the command line because Containers normally do not consume all the resources that are guaranteed to them. To illustrate this idea, let us look at the Container created from the configuration produced above:

# '''vzctl create 101 --ostemplate fedora-core-4 --config vps.mytest'''
Creating CT private area: /vz/private/101
CT private area was created
Saved parameters for CT 101
# '''vzctl start 101'''
Starting CT …
CT is mounted
Adding IP address(es): 192.168.1.101
CT start in progress…
# '''vzcalc 101'''
Resource Current(%) Promised(%) Max(%)
=== Validating Container Configuration ===

The system resource control parameters have complex interdependencies. Violation of these interdependencies can be catastrophic for the Container. In order to ensure that a Container does not break them, it is important to validate the CT configuration file before creating CTs on its basis.

The typical validation scenario is shown below:
# '''vzcfgvalidate /etc/vz/conf/101.conf'''
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 65536)
Recommendation: othersockbuf.bar should be > 132096 (currently, 122880)
# '''vzctl set 101 --kmemsize 2211840:2359296 --save'''
Saved parameters for CT 101
# '''vzcfgvalidate /etc/vz/conf/101.conf'''
Recommendation: kmemsize.lim-kmemsize.bar should be > 163840 (currently, 147456)
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 65536)
Recommendation: othersockbuf.bar should be > 132096 (currently, 122880)
Validation completed: success
The utility checks constraints on the resource management parameters and displays all the constraint violations found. There can be three levels of violation severity:

|}

In the scenario above, the first run of the vzcfgvalidate utility found a critical error for the kmemsize parameter value. After setting reasonable values for kmemsize, the resulting configuration produced only recommendations, and the Container can be safely run with this configuration.
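The arithmetic behind the kmemsize recommendation can be reproduced directly. The threshold 163840 below is taken from the session output above, not from vzcfgvalidate's actual source:

```shell
# Re-check the kmemsize.lim-kmemsize.bar gap from the session above.
KMEMSIZE="2211840:2359296"
bar=${KMEMSIZE%%:*}
lim=${KMEMSIZE##*:}
gap=$((lim - bar))

if [ "$gap" -gt 163840 ]; then
    echo "kmemsize gap OK ($gap)"
else
    echo "Recommendation: kmemsize.lim-kmemsize.bar should be > 163840 (currently, $gap)"
fi
```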
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− |