{{Virtuozzo}}
The following demo scripts (scenarios) can be used to show the advantages of Virtuozzo.
== Full container lifecycle ==
Create a container, set an IP, start, add a user, enter, exec, show
<code>ps -axf</code> output inside the container, stop, and destroy.
It should take about two minutes (''"compare that to the time you need
to deploy a new (non-virtual) server!"''). During the demonstration,
describe what's happening and why.
Here are the example commands needed:
<pre>
# CT=123
# IP=10.1.1.123
# sed -i "/^$IP /d" ~/.ssh/known_hosts
# time vzctl create $CT --ostemplate fedora-core-5-i386-default
# vzctl set $CT --ipadd $IP --hostname newCT --save
# vzctl start $CT
# vzctl exec $CT ps axf
# vzctl set $CT --userpasswd guest:secret --save
# ssh guest@$IP
[newCT]# ps axf
[newCT]# logout
# vzctl stop $CT
# vzctl destroy $CT
</pre>
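At any point during the demo you can list the containers and their states (<code>vzlist</code> comes with the vzctl tools):
<pre>
# vzlist -a
</pre>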
== Massive container creation ==
Create/start 50 or 100 containers in a shell loop. Shows fast deployment
and high density.
Here are the example commands needed:
<pre>
# time for ((CT=200; CT<250; CT++)); do \
> time vzctl create $CT --ostemplate fedora-core-9-i386; \
> vzctl start $CT; \
> done
</pre>
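To confirm that all the new containers are actually up, a quick sanity check with <code>vzlist</code>:
<pre>
# vzlist -H -o ctid,status | grep -c running
</pre>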
== Massive container load ==
Use the containers from the previous item and load them with <code>ab</code> or
<code>http_load</code>. This demo shows that multiple containers work
just fine, with low response times.
<pre>
# for ((CT=200; CT<250; CT++)); do \
> vzctl set $CT --ipadd 10.1.1.$CT --save; \
> done
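The containers need something to serve. Assuming the OS template ships Apache (with a minimal template you may need to install it first), start it in each container:
<pre>
# for ((CT=200; CT<250; CT++)); do \
> vzctl exec $CT service httpd start; \
> done
</pre>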
</pre>
On another machine, install <code>http_load</code> (here from a locally downloaded RPM):
<pre>
# rpm -ihv http_load-*.rpm
</pre>
FIXME: http_load commands
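In the meantime, a possible invocation (a sketch; <code>http_load</code> reads URLs from a file and needs the containers to answer HTTP, as set up above):
<pre>
# for ((CT=200; CT<250; CT++)); do echo "http://10.1.1.$CT/"; done > urls.txt
# http_load -parallel 50 -seconds 60 urls.txt
</pre>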
== Live migration ==
If you have two boxes, run <code>vzmigrate --online</code> from one box
to the other. You can, say, run <code>xvnc</code> in a container, connect to it
with <code>vncviewer</code>, then run <code>xscreensaver-demo</code>, choose
a suitable screensaver (eye-candy, but not too CPU-hungry), and start a live
migration while the picture is moving. You'll see that <code>xscreensaver</code>
stalls for a few seconds but then continues to run, now on another machine!
That looks amazing, to say the least.
FIXME: commands, setup, VNC template.
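In the meantime, a minimal sketch of the migration itself (assuming passwordless root SSH from the source node to the destination; <code>dst.example.com</code> is a placeholder):
<pre>
# vzmigrate --online dst.example.com $CT
</pre>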
=== CRIU (Checkpoint and Restore In Userspace) ===
* [https://github.com/tych0/presentations/blob/master/ods2014.md Migration of Doom inside container inside LXC container]
* [http://criu.org/Docker Checkpoint and Restore of Docker container]
* [https://github.com/jpetazzo/critmux CRIU + tmux]
* [http://criu.org/Simple_loop Simple loop]
* [http://criu.org/Asciinema CRIU screencasts]
== Resource management ==
The scenarios below aim to show how OpenVZ resource management works.
=== [[UBC]] protection ===
==== fork() bomb ====
Enter a container and run a fork() bomb inside it:
<pre>
# vzctl enter $CT
[ve]# while true; do \
> while true; do \
> echo " " > /dev/null; \
> done & \
> done
</pre>
We can see that the number of processes inside the container does not grow;
only the <code>numproc</code> and/or <code>kmemsize</code> fail counters
in <code>/proc/user_beancounters</code> increase.
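A convenient way to watch the fail counters grow from the host (a sketch; the exact layout of <code>/proc/user_beancounters</code> varies between kernel versions):
<pre>
# watch -n1 'grep -E "numproc|kmemsize" /proc/user_beancounters'
</pre>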
==== dentry cache eat up ====
FIXME
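In the meantime, a possible sketch (the idea is to create lots of directory entries inside the container until the <code>dcachesize</code> fail counter in <code>/proc/user_beancounters</code> starts growing):
<pre>
[ve]# mkdir /tmp/dcache; cd /tmp/dcache
[ve]# for ((i=0; i<1000000; i++)); do mkdir d$i; done
</pre>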
=== CPU scheduler ===
{{Warning|CPU weights only work in stable kernels.}}
Create 3 containers:
<pre>
# vzctl create 101
# vzctl create 102
# vzctl create 103
</pre>
Set container weights:
<pre>
# vzctl set 101 --cpuunits 1000 --save
# vzctl set 102 --cpuunits 2000 --save
# vzctl set 103 --cpuunits 3000 --save
</pre>
This sets the CPU sharing to <code>CT101 : CT102 : CT103 = 1 : 2 : 3</code>.
Start containers:
<pre>
# vzctl start 101
# vzctl start 102
# vzctl start 103
</pre>
Run busy loops in all three containers (each in its own terminal, since the loop occupies the shell):
<pre>
# vzctl enter 101
[ve101]# while true; do true; done
# vzctl enter 102
[ve102]# while true; do true; done
# vzctl enter 103
[ve103]# while true; do true; done
</pre>
Check in top that sharing works:
<pre>
# top
COMMAND %CPU
bash 48.0
bash 34.0
bash 17.5
</pre>
So we see that CPU time is distributed among the containers in proportion ≈ 1 : 2 : 3.
Now start some more busy loops; the CPU distribution between the containers should remain the same.
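You can also check the overall CPU unit allocation on the node with <code>vzcpucheck</code> (shipped with the vzctl tools):
<pre>
# vzcpucheck
</pre>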
=== Disk quota ===
Set the container's disk space limit (soft and hard barriers, in kilobytes; here 1 GB soft and about 1.1 GB hard), then try to exceed it:
<pre>
# vzctl set CTID --diskspace 1048576:1153434 --save
# vzctl start CTID
# vzctl enter CTID
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000
dd: writing `/tmp/tmp.file': Disk quota exceeded
</pre>
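The limit is also visible from inside the container, where <code>df</code> reports the quota rather than the size of the host's disk:
<pre>
[ve]# df -h /
</pre>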
[[Category:Events]]