The following demo scripts (scenarios) can be used to show the advantages of OpenVZ.
  
== Full container lifecycle ==
  
Create a container, set an IP, start, add a user, enter, exec, show <code>ps -axf</code> output inside the container, stop, and destroy. It should take about two minutes (''"compare that to the time you need to deploy a new (non-virtual) server!"''). During the demonstration, describe what's happening and why.
  
 
Here are the example commands needed:
 
 # CT=123
 # IP=10.1.1.123
 # sed -i "/$IP /d" ~/.ssh/
 # time vzctl create $CT --ostemplate fedora-core-5-i386-default
 # vzctl set $CT --ipadd $IP --hostname newCT --save
 # vzctl start $CT
 # vzctl exec $CT ps axf
 # vzctl set $CT --userpasswd guest:secret --save
 # ssh guest@$IP
 [newCT]# ps axf
 [newCT]# logout
 # vzctl stop $CT
 # vzctl destroy $CT
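The <code>ps axf</code> step above is the punchline: a freshly created container runs only a handful of processes. For illustration only (the exact list depends on the OS template), the output looks roughly like this:
<pre>
[newCT]# ps axf
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:00 init
  ... ?        Ss     0:00 syslogd
  ... ?        Ss     0:00 /usr/sbin/sshd
  ... pts/0    R+     0:00 ps axf
</pre>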
  
== Massive container creation ==
  
Create/start 50 or 100 containers in a shell loop. This shows fast deployment and high density.
  
 
Here are the example commands needed:
 
<pre>
# time for ((CT=200; CT<250; CT++)); do \
>  time vzctl create $CT --ostemplate fedora-core-9-i386; \
>  vzctl start $CT; \
> done
</pre>
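When the demo is over, the same loop pattern cleans everything up again (a sketch built from the <code>vzctl stop</code>/<code>destroy</code> commands shown earlier):
<pre>
# for ((CT=200; CT<250; CT++)); do \
>  vzctl stop $CT; \
>  vzctl destroy $CT; \
> done
</pre>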
  
== Massive container load ==
  
Use the containers from the previous item and load them with <code>ab</code> or <code>http_load</code>. This demo shows that multiple containers work just fine, with low response times.
  
First, give each container an IP address:
<pre>
# for ((CT=200; CT<250; CT++)); do \
>  vzctl set $CT --ipadd 10.1.1.$CT --save; \
> done
</pre>

On another machine:

<pre>
# rpm -ihv http_load
#
</pre>

FIXME: http_load commands
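Until the FIXME above is filled in, here is a minimal sketch: build a URL list covering all containers and drive it with <code>http_load</code>. This assumes each container runs a web server on port 80; the <code>-parallel</code> and <code>-seconds</code> values are arbitrary:
<pre>
# for ((CT=200; CT<250; CT++)); do \
>  echo "http://10.1.1.$CT/"; \
> done > urls.txt
# http_load -parallel 50 -seconds 60 urls.txt
</pre>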
  
 
== Live migration ==
 
If you have two boxes, do <code>vzmigrate --online</code> from one box to another. You can use, say, <code>xvnc</code> in a container and <code>vncclient</code> to connect to it, then run <code>xscreensaver-demo</code>, choose a suitable screensaver (eye candy, but not too CPU-hungry), and, while the picture is moving, start a live migration. You'll see that <code>xscreensaver</code> stalls for a few seconds and then keeps running on the other machine! That looks amazing, to say the least.

FIXME: commands, setup, VNC template.
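Until then, a minimal sketch of the migration step itself; <code>dst.example.com</code> is a placeholder for the destination node, and <code>vzmigrate</code> works over ssh, so passwordless root ssh to the destination makes the demo smoother:
<pre>
# vzmigrate --online dst.example.com $CT
</pre>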

== Resource management ==

The scenarios below aim to show how OpenVZ resource management works.

=== [[UBC]] protection ===

==== fork() bomb ====
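As a preparatory step (an assumption, not part of the original demo), the container's process barrier can be lowered so the effect shows up quickly; then enter the container with <code>vzctl enter $CT</code> and run the loop below:
<pre>
# vzctl set $CT --numproc 80:80 --save
</pre>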
<pre>
# while [ true ]; do \
>    while [ true ]; do \
>        echo " " > /dev/null;
>    done &
> done
</pre>

We can see that the number of processes inside the container does not keep growing: only the <code>numproc</code> and/or <code>kmemsize</code> fail counters in <code>/proc/user_beancounters</code> increase.

==== dentry cache eat up ====
FIXME

=== CPU scheduler ===

{{Warning|CPU weights only work in stable kernels.}}

Create 3 containers:
<pre>
# vzctl create 101
# vzctl create 102
# vzctl create 103
</pre>

Set container weights:
<pre>
# vzctl set 101 --cpuunits 1000 --save
# vzctl set 102 --cpuunits 2000 --save
# vzctl set 103 --cpuunits 3000 --save
</pre>

This sets CPU sharing in the ratio <code>CT101 : CT102 : CT103 = 1 : 2 : 3</code>.

Start the containers:
<pre>
# vzctl start 101
# vzctl start 102
# vzctl start 103
</pre>

Run busy loops in all containers:
<pre>
# vzctl enter 101
[ve101]# while [ true ]; do true; done
# vzctl enter 102
[ve102]# while [ true ]; do true; done
# vzctl enter 103
[ve103]# while [ true ]; do true; done
</pre>

Check in <code>top</code> that the sharing works:
<pre>
# top
COMMAND    %CPU
bash       48.0
bash       34.0
bash       17.5
</pre>

So we see that CPU time is given to the containers in proportion ~1 : 2 : 3.

Now start some more busy loops. The CPU distribution should remain the same.

=== Disk quota ===
<pre>
# vzctl set CTID --diskspace 1048576:1153434 --save
# vzctl start CTID
# vzctl enter CTID
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000
dd: writing `/tmp/tmp.file': Disk quota exceeded
</pre>
  
[[Category:Events]]
