<noinclude>{{UG/Header}}</noinclude>
 
 
The main goal of resource control in OpenVZ is to provide Service Level Management or Quality of Service (QoS) for Containers. Correctly configured resource control settings prevent serious impact on other Containers when any single Container over-uses resources, whether accidentally or maliciously. Using resource control parameters for Quality of Service management also allows you to enforce fair resource usage among Containers and, if necessary, to provide better service quality for preferred CTs.
 
 
| Disk
| This group of parameters determines disk quota in OpenVZ. The OpenVZ disk quota is implemented on two levels: the per-CT level and the per-user/group level. You can turn disk quotas on or off at either level and configure their settings.
| DISK_QUOTA, DISKSPACE, DISKINODES, QUOTATIME, QUOTAUGIDLIMIT, IOPRIO
| [[#Managing Disk Quotas]]
|-
  
 
=== What are Disk Quotas? ===

Disk quotas enable system administrators to control the size of Linux file systems by limiting the amount of disk space and the number of inodes a Container can use. These quotas are known as per-CT quotas or first-level quotas in OpenVZ. In addition, OpenVZ enables the Container administrator to limit disk space and the number of inodes that individual users and groups in that CT can use. These quotas are called per-user and per-group quotas or second-level quotas in OpenVZ.
  
 
By default, OpenVZ has first-level quotas enabled (which is defined in the OpenVZ global configuration file), whereas second-level quotas must be turned on for each Container separately (in the corresponding CT configuration files). It is impossible to turn on second-level disk quotas for a Container if first-level disk quotas are off for that Container.
 
  
 
=== Disk Quota Parameters ===
 
The table below summarizes the disk quota parameters that you can control. The '''File''' column indicates whether the parameter is defined in the OpenVZ global configuration file (G), in the CT configuration files (V), or is defined in the global configuration file but can be overridden in a separate CT configuration file (GV).
 
 
 
{| class="wikitable"
! Parameter !! Description !! File
|-
| DISK_QUOTA
| Indicates whether first-level disk quotas are on or off for all Containers or for a separate Container.
| GV
|-
| DISKSPACE
| Total size of disk space that can be consumed by the Container, in 1-KB blocks.
| V
|-
| DISKINODES
| Total number of disk inodes (files, directories, and symbolic links) the Container can allocate.
| V
|-
| QUOTATIME
| The grace period of the disk quota, specified in seconds.
| V
|-
| QUOTAUGIDLIMIT
| Number of user/group IDs allowed for the Container internal (second-level) disk quota. If set to 0, per-user/group quotas are not enabled.
| V
|-
| IOPRIO
| The disk I/O priority level of the Container (0-7; the default is 4).
| V
|}
  
 
The session below illustrates a scenario when first-level quotas are on by default and are turned off for Container 101:
 
  ''[checking that quota is on]''
  # '''grep DISK_QUOTA /etc/vz/vz.conf'''
  DISK_QUOTA=yes
  
  ''[checking available space on /vz partition]''
  # '''df /vz'''
  Filesystem          1k-blocks      Used Available Use% Mounted on
  /dev/sda2              8957295  1421982  7023242  17% /vz
  
  ''[editing CT configuration file to add DISK_QUOTA=no]''
  # '''vi /etc/vz/conf/101.conf'''
  
  ''[checking that quota is off for CT 101]''
  # '''grep DISK_QUOTA /etc/vz/conf/101.conf'''
  DISK_QUOTA=no
  
  # '''vzctl start 101'''
  Starting CT ...
  CT is mounted
  Adding IP address(es): 192.168.1.101
  Hostname for CT set: vps101.my.org
  CT start in progress...
  # '''vzctl exec 101 df'''
  Filesystem          1k-blocks      Used Available Use% Mounted on
  simfs                  8282373    747060  7023242  10% /
 
  
 
As the example above shows, when quotas are turned off, the only limits on a Container's disk space and inodes are the available space and inodes on the partition where the CT private area resides.
 
=== Setting Up Per-CT Disk Quota Parameters ===

Three parameters determine how much disk space and how many inodes a Container can use. These parameters are specified in the Container configuration file:
 
; DISKSPACE
: Total size of disk space that can be consumed by the Container, in 1-KB blocks. When the space used by the Container hits the soft limit, the CT can allocate additional disk space up to the hard limit during the grace period specified by the QUOTATIME parameter.
; DISKINODES
: Total number of disk inodes (files, directories, and symbolic links) the Container can allocate. When the number of inodes used by the Container hits the soft limit, the CT can create additional file entries up to the hard limit during the grace period specified by the QUOTATIME parameter.
; QUOTATIME
: The grace period of the disk quota, specified in seconds. The Container is allowed to temporarily exceed the soft limit values for the disk space and disk inode quotas for no more than the period specified by this parameter.
  
 
The first two parameters have both soft and hard limits (or, simply, barriers and limits). The hard limit is the limit that cannot be exceeded under any circumstances. The soft limit can be exceeded up to the hard limit, but as soon as the grace period expires, the additional disk space or inodes allocations will fail. Barriers and limits are separated by colons (“:”) in Container configuration files and in the command line.
 
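For example, with the settings made in the session below, the corresponding lines in the CT configuration file look as follows (a sketch; the values are the ones set further down):

  DISKSPACE="1000000:1100000"
  DISKINODES="90000:91000"
  QUOTATIME="600"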
 
The following session sets the disk space available to Container 101 to approximately 1 GB and allows the CT to allocate up to 90,000 inodes. The grace period for the quotas is set to ten minutes:
  
  # '''vzctl set 101 --diskspace 1000000:1100000 --save'''
  Saved parameters for CT 101
  # '''vzctl set 101 --diskinodes 90000:91000 --save'''
  Saved parameters for CT 101
  # '''vzctl set 101 --quotatime 600 --save'''
  Saved parameters for CT 101
  # '''vzctl exec 101 df'''
  Filesystem          1k-blocks      Used Available Use% Mounted on
  simfs                  1000000    747066    252934  75% /
  # '''vzctl exec 101 stat -f /'''
   File: "/"
     ID: 0        Namelen: 255    Type: ext2/ext3
  
 
=== Turning On and Off Second-Level Quotas for Container ===

The parameter that controls the second-level disk quotas is <code>QUOTAUGIDLIMIT</code> in the Container configuration file. By default, the value of this parameter is zero, which corresponds to disabled per-user/group quotas.
 
 
 
 
If you assign a non-zero value to the <code>QUOTAUGIDLIMIT</code> parameter, this has the following two results:
 
 
# Second-level (per-user and per-group) disk quotas are enabled for the given Container;
# The value that you assign to this parameter becomes the limit on the number of file owners and groups of this Container, including Linux system users. Note that you will theoretically be able to create extra users in this Container, but if the number of file owners inside the Container has already reached the limit, these users will not be able to own files.
  
Enabling per-user/group quotas for a Container requires restarting the Container. Choose the value carefully: the bigger the value you set, the bigger the kernel memory overhead this Container creates. The value must be greater than or equal to the number of entries in the Container <code>/etc/passwd</code> and <code>/etc/group</code> files. Taking into account that a newly created Red Hat Linux-based CT has about 80 entries in total, the typical value would be 100. However, for Containers with a large number of users this value may be increased.
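For example, you can count these entries for a running Container as follows (a sketch; the figure shown is illustrative):

  # '''vzctl exec 101 cat /etc/passwd /etc/group | wc -l'''
  82

Note that the pipe is processed on the Hardware Node: <code>vzctl exec</code> prints the contents of both files, and <code>wc -l</code> counts the lines.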
  
 
The session below turns on second-level quotas for Container 101:
 
  
  # '''vzctl set 101 --quotaugidlimit 100 --save'''
  Unable to apply new quota values: ugid quota not initialized
  Saved parameters for CT 101
  # '''vzctl restart 101'''
  Restarting container
  Stopping container ...
  Container was stopped
  Container is unmounted
  Starting container ...
  Container is mounted
  Adding IP address(es): 192.168.16.123
  Setting CPU units: 1000
  Configure meminfo: 65536
  File resolv.conf was modified
  Container start in progress...
 
  
 
=== Setting Up Second-Level Disk Quota Parameters ===
 
In order to work with disk quotas inside a Container, you should have standard quota tools installed:

  # '''vzctl exec 101 rpm -q quota'''
  quota-3.12-5
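With the quota package in place, the standard tools work inside the Container much as on an ordinary Linux system. For instance, a per-user and per-group usage report can be requested as follows (a sketch; the report itself is omitted here):

  # '''vzctl exec 101 /usr/sbin/repquota -ugn /'''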
  
 
=== Checking Quota Status ===
 
As the Hardware Node system administrator, you can check the quota status for any Container with the <code>vzquota stat</code> and <code>vzquota show</code> commands. The first command reports the status from the kernel and should be used for running Containers. The second command reports the status from the quota file (located at <code>/var/vzquota/quota.''CTID''</code>) and should be used for stopped Containers. Both commands have the same output format.
 
 
 
The session below shows a partial output of CT 101 quota statistics:
 
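The statistics were obtained with the command shown below; for a stopped Container you would use <code>vzquota show 101 -t</code> instead (the report itself is not reproduced here):

  # '''vzquota stat 101 -t'''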
  
 
The first three lines of the output show the status of first-level disk quotas for the Container. The rest of the output displays statistics for user/group quotas and has separate lines for each user and group ID existing in the system.
 
  
If you do not need the second-level quota statistics, you can omit the <code>-t</code> switch from the <code>vzquota</code> command line.

=== Configuring Container Disk I/O Priority Level ===
 
 
OpenVZ lets you configure the disk I/O (input/output) priority level of a Container. The higher the Container I/O priority level, the more time the Container will get for its disk I/O activities as compared to the other Containers on the Hardware Node. By default, any Container on the Hardware Node has its I/O priority level set to 4, but you can change the current I/O priority level in the range from 0 to 7 using the <code>--ioprio</code> option of the <code>vzctl set</code> command. For example, you can issue the following command to set the I/O priority of Container 101 to 6:
 
 
 
# '''vzctl set 101 --ioprio 6 --save'''
 
Saved parameters for Container 101
 
 
 
To check the I/O priority level currently applied to Container 101, you can execute the following command:
 
 
 
# '''grep IOPRIO /etc/vz/conf/101.conf'''
 
IOPRIO="6"
 
 
 
The command output shows that the current I/O priority level is set to 6.
 
 
 
== Managing Container CPU Resources ==
 
 
 
The current section explains the CPU resource parameters that you can configure and monitor for each Container.
 
 
 
The table below provides the name and the description for the CPU parameters. The '''File''' column indicates whether the parameter is defined in the OpenVZ global configuration file (G) or in the CT configuration files (V).
 
 
{| class="wikitable"
! Parameter !! Description !! File
|-
| ve0cpuunits
| This is a positive integer number that determines the minimal guaranteed share of the CPU time Container 0 (the Hardware Node itself) will receive. It is recommended to set the value of this parameter to be 5-10% of the power of the Hardware Node. After the Node is up and running, you can redefine the amount of the CPU time allocated to the Node by using the <code>--cpuunits</code> parameter with the <code>vzctl set 0</code> command.
| G
|-
| cpuunits
| This is a positive integer number that determines the minimal guaranteed share of the CPU time the corresponding Container will receive. {{Note|In the current version of OpenVZ, you can also use this parameter to define the CPU time share for the Hardware Node.}}
| V
|-
| cpulimit
| This is a positive number indicating the CPU time in per cent the corresponding CT is not allowed to exceed.
| V
|-
| cpus
| The number of CPUs to be used to handle the processes running inside the corresponding Container.
| V
|}
  
=== Managing CPU Share ===

The OpenVZ CPU resource control utilities allow you to guarantee any Container the amount of CPU time this Container receives. The Container can consume more than the guaranteed value if there are no other Containers competing for the CPU and the <code>cpulimit</code> parameter is not defined.

{{Note|The CPU time shares and limits are calculated on the basis of a one-second period. Thus, for example, if a Container is not allowed to receive more than 50% of the CPU time, it will be able to receive no more than half a second each second.}}

To get a view of the optimal share to be assigned to a Container, check the current Hardware Node CPU utilization:
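For instance, the check and a subsequent assignment might look as follows (a sketch with illustrative numbers; the actual utilization and node power depend on your hardware):

  # '''vzcpucheck'''
  Current CPU utilization: 5166
  Power of the node: 73072.5

With the node power shown above, promising Container 102 about 2% of the CPU time and capping it at 4% could look like this:

  # '''vzctl set 102 --cpuunits 1500 --cpulimit 4 --save'''
  Saved parameters for CT 102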
  
 
Container 102 will receive from 2 to 4% of the Hardware Node CPU time unless the Hardware Node is overcommitted, i.e. the running Containers have been promised more CPU units than the power of the Hardware Node. In this case the CT might get less than 2 per cent.
 
 
{{Note|To set the <code>--cpuunits</code> parameter for the Hardware Node, you should indicate <code>0</code> as the Container ID (e.g. <code>vzctl set 0 --cpuunits 5000 --save</code>).}}
 
 
=== Configuring Number of CPUs Inside Container ===
 
 
If your Hardware Node has more than one physical processor installed, you can control the number of CPUs which will be used to handle the processes running inside separate Containers. By default, a Container is allowed to consume the CPU time of all processors on the Hardware Node, i.e. any process inside any Container can be executed on any processor on the Node. However, you can modify the number of physical CPUs which will be simultaneously available to a Container using the <code>--cpus</code> option of the <code>vzctl set</code> command. For example, if your Hardware Node has 4 physical processors installed, i.e. any Container on the Node can make use of these 4 processors, you can set the processes inside Container 101 to be run on 2 CPUs only by issuing the following command:
 
 
# '''vzctl set 101 --cpus 2 --save'''
 
 
{{Note|The number of CPUs to be set for a Container must not exceed the number of physical CPUs installed on the Hardware Node. In this case the 'physical CPUs' notation designates the number of CPUs the OpenVZ kernel is aware of (you can view this CPU number using the <code>cat /proc/cpuinfo</code> command on the Hardware Node).}}
 
 
You can check that the number of CPUs has been successfully changed by running the <code>cat /proc/cpuinfo</code> command inside your Container. Assuming that you have set two physical processors to handle the processes inside Container 101, your command output may look as follows:
 
 
# '''vzctl exec 101 cat /proc/cpuinfo'''
 
processor : 0
 
vendor_id : GenuineIntel
 
cpu family : 15
 
model : 4
 
model name : Intel(R) Xeon(TM) CPU 2.80GHz
 
stepping : 1
 
cpu MHz : 2793.581
 
cache size : 1024 KB
 
...
 
processor : 1
 
vendor_id : GenuineIntel
 
cpu family : 15
 
model : 4
 
model name : Intel(R) Xeon(TM) CPU 2.80GHz
 
stepping : 1
 
cpu MHz : 2793.581
 
cache size : 1024 KB
 
...
 
 
The output shows that Container 101 is currently bound to only two processors on the Hardware Node instead of the 4 available to the other Containers on this Node. It means that, from this point on, the processes of Container 101 will be simultaneously executed on no more than 2 physical CPUs, while the other Containers on the Node will continue consuming the CPU time of all 4 Hardware Node processors, if needed. Note also that the particular physical CPUs handling Container 101 might not remain the same during the Container operation; they may change for load-balancing reasons. The only thing that cannot change is their maximum number.
 
  
 
== Managing System Parameters ==
 
  
You can edit any of these parameters in the <code>/etc/vz/conf/''CTID''.conf</code> file of the corresponding Container by means of your favorite text editor (for example, vi or emacs), or by running the <code>vzctl set</code> command. For example:
  
 
  # '''vzctl set 101 --kmemsize 2211840:2359296 --save'''
 
 
Any CT is configured by means of its own configuration file. You can manage your CT configurations in a number of ways:
 
 
<ol>
<li>Using configuration sample files shipped with OpenVZ. These files are used when a new Container is being created (for details, see the [[#Creating and Configuring New Container]] section). They are stored in <code>/etc/vz/conf/</code> and have the <code>ve‑''name''.conf-sample</code> mask. Currently, the following configuration sample files are provided:
 
* light – to be used for creating “light” CTs having restrictions on the upper limit of quality of service parameters;
* basic – to be used for common CTs.
  
 
{{Note|Configuration sample files cannot contain spaces in their names.}}
 
 
Any sample configuration file may also be applied to a Container after it has been created. You would do this if, for example, you want to upgrade or downgrade the overall resources configuration of a particular CT:
 
  
  # '''vzctl set 101 --applyconfig light --save'''
  
 
This command applies all the parameters from the ve‑light.conf‑sample file to the given CT, except for the OSTEMPLATE, VE_ROOT, and VE_PRIVATE parameters, should they exist in the sample configuration file.</li>
 
 
It is possible to create a Container configuration that represents, roughly, a given fraction of the Hardware Node. If you want a configuration such that up to 20 fully loaded Containers can run simultaneously on the given Hardware Node, you can create it as illustrated below:
  
  # '''cd /etc/vz/conf'''
  # '''vzsplit -n 20 -f vps.mytest'''
  Config /etc/vz/conf/ve-vps.mytest.conf-sample was created
  # '''vzcfgvalidate /etc/vz/conf/ve-vps.mytest.conf-sample'''
  Recommendation: kmemsize.lim-kmemsize.bar should be > 253952 (currently, 126391)
  Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 93622)
  
 
The number of Containers you can run on the Hardware Node is actually several times greater than the value specified in the command line because Containers normally do not consume all the resources that are guaranteed to them. To illustrate this idea, let us look at the Container created from the configuration produced above:
 
  
  # '''vzctl create 101 --ostemplate centos-5 --config vps.mytest'''
  Creating CT private area: /vz/private/101
  CT private area was created
  Saved parameters for CT 101
  # '''vzctl start 101'''
  Starting CT ...
  CT is mounted
  Adding IP address(es): 192.168.1.101
  CT start in progress...
  # '''vzcalc 101'''
  Resource    Current(%)  Promised(%)  Max(%)
 
=== Validating Container Configuration ===
 
 
The system resource control parameters have complex interdependencies. Violation of these interdependencies can be catastrophic for the Container. In order to ensure that a Container does not break them, it is important to validate the CT configuration file before creating CTs on its basis.
 
 
Here is how to validate a CT configuration:
 
  
 
  # '''vzcfgvalidate /etc/vz/conf/101.conf'''
 
 
  Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 65536)
 
  Recommendation: othersockbuf.bar should be > 132096 (currently, 122880)
  
 
The utility checks constraints on the resource management parameters and displays all the constraint violations found. There can be three levels of violation severity; in the utility output they are prefixed ''Recommendation'', ''Warning'', and ''Error'', in increasing order of severity.
  
==== Manual adjustment ====
 
 
To fix errors or warnings reported by <code>vzcfgvalidate</code>, adjust the parameters accordingly and re-run the utility:
 
 
 
# '''vzctl set 101 --kmemsize 2211840:2359296 --save'''
 
Saved parameters for CT 101
 
# '''vzcfgvalidate /etc/vz/conf/101.conf'''
 
Recommendation: dgramrcvbuf.bar should be > 132096 (currently, 65536)
 
Recommendation: othersockbuf.bar should be > 132096 (currently, 122880)
 
Validation completed: success
 
 
 
In the scenario above, the first run of the <code>vzcfgvalidate</code> utility found a critical error for the <code>kmemsize</code> parameter value. After setting reasonable values for <code>kmemsize</code>, the resulting configuration produced only recommendations, and the Container can be safely run with this configuration.
 
 
 
==== Automatic adjustment ====
 
 
 
Instead of adjusting the parameters by hand, you can have <code>vzcfgvalidate</code> do it for you: the <code>-r</code> option switches the utility to automatic repair mode, and the <code>-i</code> option to interactive repair mode, in which it asks for confirmation before each change.
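For example (a sketch; the messages printed in repair mode may differ between versions):

  # '''vzcfgvalidate -r /etc/vz/conf/101.conf'''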
 
 
 
=== Applying New Configuration Sample to Container ===
 
 
 
The OpenVZ software enables you to change the configuration sample file a Container is based on and, thus, to modify all the resources the Container may consume and/or allocate at once. For example, if Container 101 is currently based on the <code>light</code> configuration sample and you are planning to run some more heavy-weight application inside the Container, you may wish to apply the <code>basic</code> sample to it instead of <code>light</code>, which will automatically adjust the necessary Container resource parameters. To this effect, you can execute the following command on the Node:
 
 
 
# '''vzctl set 101 --applyconfig basic --save'''
 
Saved parameters for CT 101
 
 
 
This command reads the resource parameters from the <code>ve-basic.conf-sample</code> file located in the <code>/etc/vz/conf</code> directory and applies them one by one to Container 101.
 
 
 
When applying new configuration samples to Containers, please keep in mind the following:
 
 
 
* All Container sample files are located in the /etc/vz/conf directory on the Hardware Node and are named according to the following pattern: <code>ve-''name''.conf-sample</code>. You should specify only the <code>''name''</code> part of the corresponding sample name after the <code>--applyconfig</code> option (<code>basic</code> in the example above).
 
* The <code>--applyconfig</code> option applies all the parameters from the specified sample file to the given Container, except for the <code>OSTEMPLATE</code>, <code>VE_ROOT</code>, <code>VE_PRIVATE</code>, <code>HOSTNAME</code>, <code>IP_ADDRESS</code>, <code>TEMPLATE</code>, <code>NETIF</code> parameters (if they exist in the sample file).
 
* You may need to restart your Container, depending on whether the changes for the selected parameters can be applied on the fly. If some parameters cannot be configured on the fly, you will see a corresponding message informing you of this.
 
 
 
<noinclude>{{UG/Footer}}</noinclude>
 
