Demo scripts

The following demo scripts (scenarios) can be used to show the advantages of OpenVZ.

Full container lifecycle

Create a container, set an IP, start it, add a user, enter, exec a command, show ps axf output inside the container, stop it, and destroy it. It should take about two minutes ("compare that to the time you need to deploy a new (non-virtual) server!"). During the demonstration, describe what's happening and why.

Here are the example commands needed:

# CT=123
# IP=10.1.1.123
# sed -i "/$IP /d" ~/.ssh/known_hosts
# time vzctl create $CT --ostemplate fedora-core-5-i386-default
# vzctl set $CT --ipadd $IP --hostname newCT --save
# vzctl start $CT
# vzctl exec $CT ps axf
# vzctl set $CT --userpasswd guest:secret --save
# ssh guest@$IP
[newCT]# ps axf
[newCT]# logout
# vzctl stop $CT
# vzctl destroy $CT

Massive container creation

Create/start 50 or 100 containers in a shell loop. Shows fast deployment and high density.

Here are the example commands needed:

# time for ((CT=200; CT<250; CT++)); do \
>  time vzctl create $CT --ostemplate fedora-core-9-i386; \
>  vzctl start $CT; \
> done

Massive container load

Use the containers from the previous item and load them with ab or http_load. This demo shows that multiple containers work just fine, with low response times, etc.

# for ((CT=200; CT<250; CT++)); do \
>  vzctl set $CT --ipadd 10.1.1.$CT --save; \
> done

On another machine:

# rpm -ihv http_load
# 

FIXME: http_load commands
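
Until the FIXME is filled in, here is a minimal sketch (the URL list, concurrency, and duration are assumptions; it also assumes a web server is running in each container):

# for ((CT=200; CT<250; CT++)); do echo "http://10.1.1.$CT/"; done > urls.txt
# http_load -parallel 10 -seconds 60 urls.txt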

Live migration

If you have two boxes, do vzmigrate --online from one box to another. You can use, say, xvnc in a container and vncclient to connect to it, then run xscreensaver-demo, choose a suitable screensaver (eye candy but not too CPU aggressive) and, while the picture is moving, start a live migration. You'll see that xscreensaver stalls for a few seconds but then continues to run, now on another machine! That looks amazing, to say the least.

FIXME: commands, setup, VNC template.
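
Until then, the bare migration step looks like this (the destination hostname and CT ID are placeholders; the destination node must be reachable as root over SSH):

# vzmigrate --online dst.example.com 123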

Resource management

The scenarios below aim to show how OpenVZ resource management works.

UBC protection

fork() bomb

# while [ true ]; do \
>     while [ true ]; do \
>         echo " " > /dev/null;
>     done &
> done

We can see that the number of processes inside the container does not keep growing; only the numproc and/or kmemsize fail counters in /proc/user_beancounters increase.
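
To watch the counters from the hardware node (on the node, /proc/user_beancounters lists all containers):

# grep -E "numproc|kmemsize" /proc/user_beancounters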

dentry cache eat up

FIXME
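
One possible demo, until the FIXME is filled in (an assumption, not the original scenario): look up lots of distinct non-existent names inside a container, which inflates the dentry cache accounted by the dcachesize beancounter:

[ve]# for ((i=0; i<1000000; i++)); do stat no-such-file-$i >/dev/null 2>&1; done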

CPU scheduler

Warning: CPU weights only work in stable kernels.

Create 3 containers:

# vzctl create 101
# vzctl create 102
# vzctl create 103

Set container weights:

# vzctl set 101 --cpuunits 1000 --save
# vzctl set 102 --cpuunits 2000 --save
# vzctl set 103 --cpuunits 3000 --save

This sets CPU sharing as CT101 : CT102 : CT103 = 1 : 2 : 3.

Start containers:

# vzctl start 101
# vzctl start 102
# vzctl start 103
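
Optionally, verify the assigned units with vzcpucheck -v (it lists running containers and their cpuunits):

# vzcpucheck -v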

Run busy loops in all containers (do each vzctl enter from a separate terminal on the node, since every loop occupies its shell):

# vzctl enter 101
[ve101]# while [ true ]; do true; done
# vzctl enter 102
[ve102]# while [ true ]; do true; done
# vzctl enter 103
[ve103]# while [ true ]; do true; done

Check in top that sharing works:

# top
COMMAND    %CPU
bash       48.0
bash       34.0
bash       17.5

So, we see that CPU time is given to the containers in a ratio of about 1 : 2 : 3.

Now start some more busy loops. CPU distribution should remain the same.
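
For example, add a second loop in CT101; the per-container totals in top should keep the same ratio:

[ve101]# while [ true ]; do true; done &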

Disk quota

# vzctl set CTID --diskspace 1048576:1153434 --save
# vzctl start CTID
# vzctl enter CTID
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000
dd: writing `/tmp/tmp.file': Disk quota exceeded
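
You can also confirm the limit from inside the container; with the barrier set above (1048576 one-kilobyte blocks), df reports a filesystem of about 1 GB:

[ve]# df -h /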