Demo scripts

The following demo scripts (scenarios) can be used to show the advantages of OpenVZ.
== Full container lifecycle ==
Create a container, set an IP, start, add a user, enter, exec, show <code>ps axf</code> output inside the container, stop, and destroy. It should take about two minutes (''"compare that to the time you need to deploy a new (non-virtual) server!"''). During the demonstration, describe what's happening and why.
Here are the example commands needed (the <code>sed</code> line removes a stale SSH host key for the IP, so <code>ssh</code> will not complain later):
<pre>
# CT=123
# IP=10.1.1.123
# sed -i "/$IP /d" ~/.ssh/known_hosts
# time vzctl create $CT --ostemplate fedora-core-5-i386-default
# vzctl set $CT --ipadd $IP --hostname newCT --save
# vzctl start $CT
# vzctl exec $CT ps axf
# vzctl set $CT --userpasswd guest:secret --save
# ssh guest@$IP
[newCT]# ps axf
[newCT]# logout
# vzctl stop $CT
# vzctl destroy $CT
</pre>
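The <code>ps axf</code> output is the punchline: a freshly started container runs only a handful of processes. The exact listing depends on the OS template, but it is typically just <code>init</code>, a syslog daemon, and <code>sshd</code>, along the lines of:
<pre>
[newCT]# ps axf
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:00 init
  270 ?        Ss     0:00 syslogd -m 0
  299 ?        Ss     0:00 /usr/sbin/sshd
  350 ?        Ss     0:00  \_ sshd: guest@pts/0
  351 pts/0    Ss     0:00      \_ -bash
  360 pts/0    R+     0:00          \_ ps axf
</pre>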
== Massive container creation ==
Create/start 50 or 100 containers in a shell loop. This shows fast deployment and high density.
Here are the example commands needed:
<pre>
# time for ((CT=200; CT<250; CT++)); do \
> time vzctl create $CT --ostemplate fedora-core-9-i386-default; \
> vzctl start $CT; \
> done
</pre>
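A quick way to show the result afterwards (<code>vzlist</code> ships with <code>vzctl</code>; its <code>-H</code> option suppresses the header line, so the line count should be 50 if everything started):
<pre>
# vzlist -H | wc -l
50
</pre>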
== Massive container load ==
Use the containers from the previous item and load them with <code>ab</code> or <code>http_load</code>. This demo shows that multiple containers work just fine, with low response times etc.
<pre>
# for ((CT=200; CT<250; CT++)); do \
> vzctl set $CT --ipadd 10.1.1.$CT --save; \
> done
</pre>
On another machine:
<pre>
# rpm -ihv http_load
</pre>
FIXME: http_load commands, ab/http_load setup.
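Until the FIXME is filled in, here is a minimal sketch of the load step, assuming each container's template runs a web server on port 80 and that <code>ab</code> (from the Apache httpd tools) is installed on the client machine:
<pre>
# for ((CT=200; CT<250; CT++)); do \
> ab -n 1000 -c 10 http://10.1.1.$CT/ & \
> done; wait
</pre>
Backgrounding each <code>ab</code> run loads all the containers at once; <code>wait</code> collects the results before the prompt returns.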
== Live migration ==
If you have two boxes, do a "<code>vzmigrate --online</code>" from one box to the other. You can, say, run <code>xvnc</code> in a container and connect to it with <code>vncviewer</code>, then run <code>xscreensaver-demo</code>, choose a suitable screensaver (eye candy, but not too CPU-aggressive), and start the live migration while the picture is moving. You'll see that <code>xscreensaver</code> stalls for a few seconds, but then continues to run on the other machine! That looks amazing, to say the least.
FIXME: commands, setup, VNC template.
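A minimal sketch of the migration command itself, assuming the container ID is 101 and the destination node <code>dst.example.com</code> (a hypothetical name) is reachable as root over SSH, which <code>vzmigrate</code> requires:
<pre>
# vzmigrate --online dst.example.com 101
</pre>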
== Resource management ==
=== UBC protection ===

==== fork() bomb ====
Run a fork bomb inside a container:
<pre>
# vzctl enter 123
[ve123]# while [ true ]; do \
> while [ true ]; do true; done & \
> done
</pre>
 
We can see that the number of processes inside the container does not grow; we only see the <code>numproc</code> and/or <code>kmemsize</code> fail counters increasing in <code>/proc/user_beancounters</code>.
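To show this live from the host, watch the fail counters (the last column) while the bomb is running; assuming the demo container is the only one active, a simple grep is enough:
<pre>
# egrep "numproc|kmemsize" /proc/user_beancounters
</pre>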
==== dentry cache eat up ====
=== CPU scheduler ===
{{Warning|CPU weights only work in stable kernels.}}

Create 3 containers:
<pre>
# vzctl create 101
# vzctl create 102
# vzctl create 103
</pre>
Set container weights:
<pre>
# vzctl set 101 --cpuunits 1000 --save
# vzctl set 102 --cpuunits 2000 --save
# vzctl set 103 --cpuunits 3000 --save
</pre>
This sets the CPU sharing to <code>CT101 : CT102 : CT103 = 1 : 2 : 3</code>. Start the containers:
<pre>
# vzctl start 101
# vzctl start 102
# vzctl start 103
</pre>
Run busy loops in all the containers:
<pre>
# vzctl enter 101
[ve101]# while [ true ]; do true; done
# vzctl enter 102
[ve102]# while [ true ]; do true; done
# vzctl enter 103
[ve103]# while [ true ]; do true; done
</pre>
Check in <code>top</code> that the sharing works:
<pre>
# top
COMMAND %CPU
bash    48.0
bash    34.0
bash    17.5
</pre>
So we see that CPU time is given to the containers in a proportion of roughly 1 : 2 : 3. Now start some more busy loops; the CPU distribution should remain the same.
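To make the split easier to explain, <code>vzcpucheck</code> (shipped with <code>vzctl</code>) can be run on the host; it reports the CPU units currently committed to running containers and the total power of the node, which is where the 1 : 2 : 3 proportion comes from:
<pre>
# vzcpucheck
</pre>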
=== Disk quota ===
The <code>--diskspace</code> argument is a soft:hard limit pair in 1 KB blocks, i.e. 1 GB soft and about 1.1 GB hard here:
<pre>
# vzctl set CTID --diskspace 1048576:1153434 --save
# vzctl start CTID
# vzctl enter CTID
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000
dd: writing `/tmp/tmp.file': Disk quota exceeded
</pre>
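To let the audience verify the limit from inside the container, plain <code>df</code> works, since the container sees its own (simulated) filesystem:
<pre>
[ve]# df -h /
</pre>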

[[Category:Events]]