Demo scripts

The following demo scripts (scenarios) can be used to show the advantages of OpenVZ.
== Full container lifecycle ==
Create a container, set an IP address, start it, add a user, enter it, execute commands, show <code>ps axf</code> output inside the container, stop it, and destroy it. It should take about two minutes (''"compare that to the time you need to deploy a new (non-virtual) server!"''). During the demonstration, describe what's happening and why.
Here are the example commands needed:
<pre>
# CT=123
# IP=10.1.1.123
# sed -i "/$IP /d" ~/.ssh/known_hosts
# time vzctl create $CT --ostemplate fedora-core-5-i386-default
# vzctl set $CT --ipadd $IP --hostname newCT --save
# vzctl start $CT
# vzctl exec $CT ps axf
# vzctl set $CT --userpasswd guest:secret --save
# ssh guest@$IP
[newCT]# ps axf
[newCT]# logout
# vzctl stop $CT
# vzctl destroy $CT
</pre>
== Massive container creation ==
Create/start 50 or 100 containers in a shell loop. This shows fast deployment and high density.
Here are the example commands needed:
<pre>
# time for ((CT=200; CT<250; CT++)); do \
>   time vzctl create $CT --ostemplate fedora-core-9-i386-default; \
>   vzctl start $CT; \
> done
</pre>
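To underline the density point, after the loop finishes you can show the list of running containers and the memory still free on the node (a minimal sketch):
<pre>
# vzlist
# free -m
</pre>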
== Massive container load ==
Use the containers from the previous item and load them with <code>ab</code> or <code>http_load</code>. This demo shows that multiple containers work just fine, with low response times, etc.
<pre>
# for ((CT=200; CT<250; CT++)); do \
>   vzctl set $CT --ipadd 10.1.1.$CT --save; \
> done
</pre>
 
On another machine:
<pre>
# rpm -ihv http_load
</pre>
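Then generate the load, assuming a web server is running inside each container. A minimal sketch (the URL file name and the numbers are just examples):
<pre>
# for ((CT=200; CT<250; CT++)); do echo "http://10.1.1.$CT/"; done > urls.txt
# http_load -parallel 50 -seconds 30 urls.txt
</pre>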
== Live migration ==
If you have two boxes, run "<code>vzmigrate --online</code>" from one box to the other. You can use, say, <code>xvnc</code> in a container and a VNC client to connect to it, then run <code>xscreensaver-demo</code>, choose a suitable screensaver (eye-candy, but not too CPU-aggressive), and start a live migration while the picture is moving. You'll see that <code>xscreensaver</code> stalls for a few seconds but then continues to run, on another machine! That looks amazing, to say the least.
FIXME: commands, setup, VNC template.
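Until those are written up, the migration itself boils down to roughly the following (a minimal sketch; the destination host name and container ID are placeholders):
<pre>
# vzmigrate --online dst.example.com 101
</pre>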
== Resource management ==
==== fork bomb ====
Run a fork bomb inside a container, for example:
<pre>
# vzctl enter 101
[ve]# while true; do
> sleep 60 &
> done
</pre>
We can see that the number of processes inside the container does not grow. We only see the <code>numproc</code> and/or <code>kmemsize</code> fail counters increasing in <code>/proc/user_beancounters</code>.
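To watch those fail counters from the hardware node while the bomb is running, something like this will do (the fail counter is the last column):
<pre>
# egrep "numproc|kmemsize" /proc/user_beancounters
</pre>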
==== dentry cache eat up ====
=== CPU scheduler ===
{{Warning|CPU weights only work in stable kernels.}}
Create 3 containers:
<pre>
# vzctl create 101
# vzctl create 102
# vzctl create 103
</pre>
Set the containers' CPU weights:
<pre>
# vzctl set 101 --cpuunits 1000 --save
# vzctl set 102 --cpuunits 2000 --save
# vzctl set 103 --cpuunits 3000 --save
</pre>
This sets the CPU sharing to <code>CT101 : CT102 : CT103 = 1 : 2 : 3</code>.
Start the containers:
<pre>
# vzctl start 101
# vzctl start 102
# vzctl start 103
</pre>
Run a busy loop in each container (repeat the following for 102 and 103, e.g. in separate terminals):
<pre>
# vzctl enter 101
[ve]# while true; do true; done
</pre>
So we see that CPU time is given to the containers in a proportion of roughly 1 : 2 : 3. Now start some more busy loops; the CPU distribution should remain the same.
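One way to check the proportion from the hardware node is to watch <code>top</code> and map the busy-loop PIDs to containers with <code>vzpid</code> (the PID below is just an example):
<pre>
# top
# vzpid 12345
</pre>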
=== Disk quota ===
<pre>
# vzctl set CTID --diskspace 1048576:1153434 --save
# vzctl start CTID
# vzctl enter CTID
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000
dd: writing `/tmp/tmp.file': Disk quota exceeded
</pre>
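From inside the container, <code>df</code> should reflect the configured disk quota rather than the host's real disk size; a quick check (sketch):
<pre>
[ve]# df -h
</pre>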
