Demo scripts

The following demo scripts (scenarios) can be used to show advantages of OpenVZ.
  
== Full container lifecycle ==
  
Create a container, set an IP, start it, add a user, enter it, exec a command, show <code>ps axf</code> output inside the container, stop it, and destroy it. It should take about two minutes (''"compare that to the time you need to deploy a new (non-virtual) server!"''). During the demonstration, describe what's happening and why.
 
  
Here are the example commands needed:
  
<pre>
# CT=123
# IP=10.1.1.123
# sed -i "/$IP /d" ~/.ssh/known_hosts
# time vzctl create $CT --ostemplate fedora-core-5-i386-default
# vzctl set $CT --ipadd $IP --hostname newCT --save
# vzctl start $CT
# vzctl exec $CT ps axf
# vzctl set $CT --userpasswd guest:secret --save
# ssh guest@$IP
[newCT]# ps axf
[newCT]# logout
# vzctl stop $CT
# vzctl destroy $CT
</pre>
 
 
 
== Massive container creation ==
 
 
 
Create/start 50 or 100 containers in a shell loop. Shows fast deployment and high density.
 
 
 
Here are the example commands needed:
 
 
 
<pre>
# time for ((CT=200; CT<250; CT++)); do \
>  time vzctl create $CT --ostemplate fedora-core-9-i386; \
>  vzctl start $CT; \
> done
</pre>
 
 
 
== Massive container load ==
 
 
 
Use the containers from the previous item and load them with <code>ab</code> or <code>http_load</code>. This demo shows that multiple containers work just fine, with low response time etc.
 
 
 
<pre>
# for ((CT=200; CT<250; CT++)); do \
>  vzctl set $CT --ipadd 10.1.1.$CT --save; \
> done
</pre>
 
 
 
On another machine:
 
 
 
<pre>
# rpm -ihv http_load
#
</pre>
 
FIXME: http_load commands
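
Until proper commands are written up, here is a rough sketch (the URL list file name, the concurrency and the duration are arbitrary examples, and it assumes a web server is already running in each container):

<pre>
# for ((CT=200; CT<250; CT++)); do echo "http://10.1.1.$CT/"; done > urls.txt
# http_load -parallel 10 -seconds 30 urls.txt
</pre>

<code>ab</code> can be pointed at a single container in a similar way, e.g. <code>ab -c 10 -n 1000 http://10.1.1.200/</code>.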
 
 
 
== Live migration ==
 
 
 
If you have two boxes, do <code>vzmigrate --online</code> from one box to another. You can use, say, <code>xvnc</code> in a container and a VNC client (e.g. <code>vncviewer</code>) to connect to it, then run <code>xscreensaver-demo</code>, choose a suitable screensaver (eye candy but not too CPU-aggressive), and start a live migration while the picture is moving. You'll see that <code>xscreensaver</code> stalls for a few seconds but then continues to run on another machine! That looks amazing, to say the least.
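
A minimal sketch of the migration command itself (the destination address and container ID below are just examples; <code>vzmigrate</code> also needs root SSH access from the source node to the destination node):

<pre>
# vzmigrate --online 192.168.0.2 101
</pre>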
 
 
 
FIXME: commands, setup, VNC template.
 
 
 
== Resource management ==
 
The scenarios below aim to show how OpenVZ resource management works.
 
 
 
=== [[UBC]] protection ===
 
 
 
==== fork() bomb ====
 
<pre>
# while [ true ]; do \
>    while [ true ]; do \
>        echo " " > /dev/null;
>    done &
> done
</pre>
 
 
 
We can see that the number of processes inside the container does not keep growing. We only see the <code>numproc</code> and/or <code>kmemsize</code> fail counters increasing in <code>/proc/user_beancounters</code>.
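
To watch this from the hardware node while the loop is running (the last column of <code>/proc/user_beancounters</code> is <code>failcnt</code>; the <code>grep</code> pattern below is just a convenience to keep the header and the two interesting rows visible):

<pre>
# watch -d 'grep -E "uid|numproc|kmemsize" /proc/user_beancounters'
</pre>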
 
 
 
==== dentry cache eat up ====
 
FIXME
 
 
 
=== CPU scheduler ===
 
 
 
{{Warning|CPU weights only work in stable kernels.}}
 
 
 
Create 3 containers:
 
<pre>
# vzctl create 101
# vzctl create 102
# vzctl create 103
</pre>
 
 
 
Set container weights:
 
<pre>
# vzctl set 101 --cpuunits 1000 --save
# vzctl set 102 --cpuunits 2000 --save
# vzctl set 103 --cpuunits 3000 --save
</pre>
 
 
 
This sets the CPU sharing ratio to <code>CT101 : CT102 : CT103 = 1 : 2 : 3</code>.
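
To double-check how many CPU units are currently allocated on the node, the <code>vzcpucheck</code> utility (shipped with vzctl) can be run; it summarizes the CPU unit allocation of the node:

<pre>
# vzcpucheck
</pre>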
 
 
 
Start containers:
 
<pre>
# vzctl start 101
# vzctl start 102
# vzctl start 103
</pre>
 
 
 
Run busy loops in all containers (each <code>vzctl enter</code> in its own terminal, since the loop keeps the session busy):
 
<pre>
# vzctl enter 101
[ve101]# while [ true ]; do true; done
# vzctl enter 102
[ve102]# while [ true ]; do true; done
# vzctl enter 103
[ve103]# while [ true ]; do true; done
</pre>
 
 
 
Check in top that sharing works:
 
<pre>
# top
COMMAND    %CPU
bash      48.0
bash      34.0
bash      17.5
</pre>
 
 
 
So we see that CPU time is given to the containers in a proportion of ~1 : 2 : 3.
 
 
 
Now start some more busy loops. The CPU distribution between the containers should remain the same.
 
 
 
=== Disk quota ===
 
<pre>
# vzctl set CTID --diskspace 1048576:1153434 --save
# vzctl start CTID
# vzctl enter CTID
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000
dd: writing `/tmp/tmp.file': Disk quota exceeded
</pre>
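
The <code>--diskspace</code> values above are in 1 KB blocks, i.e. roughly a 1 GB barrier and a 1.1 GB limit, so a 1000 MB file on top of the OS template is enough to hit the quota. From inside the container the limit is visible as an ordinary filesystem size:

<pre>
[ve]# df -h /
</pre>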
 
 
 
[[Category:Events]]
 
