Resource shortage

vzctl set 123 --diskspace $(( 1048576*2 )):$(( 1153434*2 )) --save
</pre>
or you can use the G suffix for gigabytes:
<pre>
vzctl set 123 --diskspace 20G:25G --save
</pre>
This saves a 20-gigabyte barrier and a 25-gigabyte limit for disk space. The result can be checked with
<pre>
# vzctl exec 123 df -h
Filesystem            Size  Used Avail Use% Mounted on
simfs                  20G  2.1G   18G  11% /
tmpfs                 443M     0  443M   0% /lib/init/rw
tmpfs                 443M     0  443M   0% /dev/shm
</pre>
</li>
</ol>
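The arithmetic-expansion form shown at the top can be checked by hand: diskspace values are counted in 1 KB blocks, so 1 GB corresponds to 1048576 blocks. A minimal sketch (the container ID 123 and the block counts are the ones from the example above); the command is only echoed here, not run:

```shell
# diskspace values are given in 1 KB blocks: 1 GB = 1048576 blocks
barrier=$(( 1048576 * 2 ))    # 2 GB barrier
limit=$(( 1153434 * 2 ))      # roughly 2.2 GB limit
echo "vzctl set 123 --diskspace ${barrier}:${limit} --save"
```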
A changed diskinodes configuration is applied instantly and does not require a restart of the container.
 
You can also find the number of free inodes with
 
<pre>
# vzctl exec 123 df -i
</pre>
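If you need the free-inode count in a script, the relevant column can be picked out with awk. A minimal sketch; the sample output below is made up for illustration, since no running container is assumed here (in practice you would pipe <code>vzctl exec 123 df -i</code> into the same awk command):

```shell
# Pick the IFree column (4th field of the data row) out of `df -i` output.
# The sample output is hard-coded; the inode numbers are hypothetical.
df_output='Filesystem     Inodes IUsed  IFree IUse% Mounted on
simfs          400000 22000 378000    6% /'
echo "$df_output" | awk 'NR==2 {print $4}'   # prints 378000
```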
{{Note|shell does not support floating-point arithmetic, i.e. you can not use expressions like <code>$((&nbsp;220000*1.5&nbsp;))</code>. To use floating point, try <code>bc</code> instead, something like this: <code><nowiki>$(echo&nbsp;220000*1.5&nbsp;|&nbsp;bc)</nowiki></code>.}}
== CPU ==
There are two parameters controlling the fair CPU scheduler in OpenVZ: cpuunits and cpulimit.

=== cpuunits ===

Cpuunits are set via
<pre>
vzctl set 101 --cpuunits 1000 --save
</pre>
If you set cpuunits for one container to one value, and cpuunits for another container to a different value, the CPU time allotted to the containers will be in the ratio of the two values. Let's use a real example. We did the following:
<pre>
vzctl set 101 --cpuunits 1000 --save
vzctl set 102 --cpuunits 2000 --save
vzctl set 103 --cpuunits 3000 --save
</pre>
If we started a CPU-intensive application in each CT, then 103 would be given three times as much CPU time as 101, and 102 would get twice as much as 101 but only a fraction of what 103 got. Here is how to determine the real ratios: add the three values, 1000+2000+3000 = 6000, then
* 101 gets 1000/6000, or 1/6 of the time (about 17%)
* 102 gets 2000/6000, or 1/3 of the time (about 33%)
* 103 gets 3000/6000, or 1/2 of the time (50%)

To summarize: these units are proportional to each other. Strictly speaking, each container's share is its units divided by the sum of the units of all CTs plus the host system (please don't forget that one). So units of 1 1 1 1 are indeed the same as 200 200 200 200, or 8888 8888 8888 8888.

You may wonder why the vzcpucheck tool exists, which returns an absolute number called the "power of the node". The thing is, when you move a CT from one box to another, it could be problematic if the two boxes use different scales on different CPUs. vzcpucheck works around that by computing the "power of the node", apparently derived from /proc/cpuinfo. If it reports a node power of 10000 and you distribute that among all the CTs on the node, and then move one CT to another node whose cpuunits were set in the same manner, that CT will get about the same share of CPU on the new node as it had on the old one.

=== cpulimit ===

The cpulimit parameter sets the absolute upper limit on the CPU time a container may get, as a percentage.
For instance:
<pre>
vzctl set 101 --cpulimit 25 --save
</pre>
says that container 101 can never get more than 25 percent of a CPU, even if the CPU is idle the other 75% of the time. The limit is calculated as a percentage of a single CPU, not of the server's CPU resources as a whole. In other words, if you have more than one CPU, you can set a cpulimit greater than 100. On a quad-core server, setting cpulimit to 100 permits a container to consume one entire core (not 100% of the server).

CPU limits are only available in the rhel5-based and rhel6-based kernels, and they behave somewhat differently in each. In the rhel5 kernel the limit has a container-wide meaning: if you have, say, a container with 2 CPUs and a 100% cpulimit, the container's usage of the two CPUs can be 100%/0%, 50%/50%, or any other combination whose sum is 100%.
{{Stub}}

In the rhel6 kernel the applied limit is divided between the online CPUs proportionally, and a busy CPU cannot borrow time from an idle one. That is, with a 2-CPU container and a 100% limit, the usage of each CPU cannot exceed 50% in any case.
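A quick sketch of the percentage arithmetic above, with assumed values: a cpulimit of 100 on a 4-core node caps the container at one core (a quarter of the server), and under the rhel6 behaviour that limit is split evenly across the container's online CPUs:

```shell
cpulimit=100   # container-wide limit, as a percent of a single CPU
cores=4        # cores in the node (assumed for illustration)
ncpus=2        # online CPUs in the container (matches the example above)

# cpulimit is relative to one CPU, so as a share of the whole node:
echo "share of whole server: $(( 100 * cpulimit / (100 * cores) ))%"   # 25%

# rhel6: the limit is divided evenly between the container's CPUs:
echo "per-CPU cap: $(( cpulimit / ncpus ))%"                           # 50%
```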
[[Category: Troubleshooting]]
