Demo scripts

From OpenVZ Virtuozzo Containers Wiki

The following demo scripts (scenarios) can be used to show advantages of OpenVZ.

Full container lifecycle

Create a container, set an IP address, start it, add a user, enter it, execute a command, show ps axf output inside the container, stop it, and destroy it. The whole cycle takes about two minutes ("compare that to the time you need to deploy a new (non-virtual) server!"). During the demonstration, describe what is happening and why.

Here are the example commands needed:

# CT=123
# IP=
# sed -i "/$IP /d" ~/.ssh/
# time vzctl create $CT --ostemplate fedora-core-5-i386-default
# vzctl set $CT --ipadd $IP --hostname newCT --save
# vzctl start $CT
# vzctl exec $CT ps axf
# vzctl set $CT --userpasswd guest:secret --save
# ssh guest@$IP
[newCT]# ps axf
[newCT]# logout
# vzctl stop $CT
# vzctl destroy $CT

Massive container creation

Create and start 50 or 100 containers in a shell loop. This shows fast deployment and high density.

Here are the example commands needed:

# time for ((CT=200; CT<250; CT++)); do \
>  time vzctl create $CT --ostemplate fedora-core-9-i386; \
>  vzctl start $CT; \
> done

Massive container load

Use the containers from the previous item and load them with ab or http_load. This demo shows that multiple containers work just fine, with low response times.

# for ((CT=200; CT<250; CT++)); do \
>  vzctl set $CT --ipadd 10.1.1.$CT --save; \
> done
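The loop above hands out one address per container: 10.1.1.200 through 10.1.1.249. As a sanity check, the same address list can be derived without touching vzctl (a sketch, purely illustrative):

```shell
# Sketch: build the 50 addresses the loop above assigns
# (10.1.1.$CT for CT 200..249); no vzctl needed here.
ips=""
for ((CT=200; CT<250; CT++)); do
    ips="$ips 10.1.1.$CT"
done
set -- $ips
echo "count=$#  first=$1"
```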

On another machine:

# rpm -ihv http_load

FIXME: http_load commands
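One possible way to drive the load is to generate a URL file covering all 50 containers and point http_load at it. This is a sketch, not the finished demo: it assumes every container serves HTTP on port 80, and the http_load flags shown are just a reasonable starting point.

```shell
# Sketch: build a URL list for http_load covering CT 200..249;
# assumes each container serves HTTP on port 80.
: > urls.txt
for ((CT=200; CT<250; CT++)); do
    echo "http://10.1.1.$CT/" >> urls.txt
done
wc -l urls.txt

# Then, on the load-generating machine, something like:
#   http_load -parallel 10 -seconds 60 urls.txt
```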

Live migration

If you have two boxes, run vzmigrate --online from one box to the other. You can, say, run xvnc in a container and use a VNC client to connect to it, then run xscreensaver-demo, choose a suitable screensaver (eye-candy but not too CPU-aggressive), and start the live migration while the picture is moving. You'll see xscreensaver stall for a few seconds and then continue to run, now on the other machine! It looks amazing, to say the least.

FIXME: commands, setup, VNC template.
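The migration itself is a single command. A sketch of the invocation (the destination hostname and container ID are assumptions, and passwordless root ssh to the destination box is required; the echo is a dry-run stand-in):

```shell
# Sketch: live-migrate CT 123 to a second OpenVZ box. The --online
# flag keeps the container running during the move, which is what
# lets xscreensaver survive the migration. DEST is an assumption.
DEST=dest.example.com
CT=123
cmd="vzmigrate --online $DEST $CT"
echo "would run: $cmd"    # replace the echo with the real command on a live setup
```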

Resource management

The scenarios below aim to show how OpenVZ resource management works.

UBC protection

fork() bomb

# while [ true ]; do \
>     while [ true ]; do \
>         echo " " > /dev/null;
>     done &
> done

We can see that the number of processes inside the container does not grow; we only see the numproc and/or kmemsize fail counters increasing in /proc/user_beancounters.
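The fail counters are the last column of /proc/user_beancounters. A sketch of pulling out the nonzero ones with awk (the sample dump below is invented to mimic the file's format; on a real host you would read /proc/user_beancounters itself):

```shell
# Sketch: list resources with a nonzero failcnt from a
# user_beancounters-style dump. The sample data is made up.
cat > ubc.sample <<'EOF'
Version: 2.5
       uid  resource      held  maxheld  barrier    limit  failcnt
      123:  kmemsize   1836619  1916098  2752512  2936012        0
            numproc         65      130      130      130       42
EOF
# failcnt is the last field; the resource name is 5 fields before it.
awk 'NF>=6 && $NF ~ /^[0-9]+$/ && $NF > 0 { print $(NF-5), "failcnt=" $NF }' ubc.sample
```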

dentry cache eat up
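One way to sketch this demo: create lots of small files, so the kernel caches a dentry for each of them. Inside a container this drives the dcachesize beancounter up until its fail counter starts growing in /proc/user_beancounters, just like numproc did for the fork() bomb. The path and file count below are arbitrary.

```shell
# Sketch of a dentry-cache eater: each created file gets a dentry.
# In a container, watch dcachesize in /proc/user_beancounters.
mkdir -p /tmp/dcache-demo
for i in $(seq 1 1000); do
    touch /tmp/dcache-demo/file$i
done
ls /tmp/dcache-demo | wc -l
```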


CPU scheduler

Warning: CPU weights only work in stable kernels.

Create 3 containers:

# vzctl create 101
# vzctl create 102
# vzctl create 103

Set container weights:

# vzctl set 101 --cpuunits 1000 --save
# vzctl set 102 --cpuunits 2000 --save
# vzctl set 103 --cpuunits 3000 --save

This sets CPU sharing to CT101 : CT102 : CT103 = 1 : 2 : 3.
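When all three containers are busy, each one's expected share is simply its weight divided by the total. A quick check of the arithmetic, assuming only these three containers compete for one CPU:

```shell
# Sketch: expected CPU share per container = cpuunits / total.
# Prints roughly 16.7%, 33.3% and 50.0%.
awk 'BEGIN {
    total = 1000 + 2000 + 3000
    printf "CT101 %.1f%%\n", 100 * 1000 / total
    printf "CT102 %.1f%%\n", 100 * 2000 / total
    printf "CT103 %.1f%%\n", 100 * 3000 / total
}'
```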

Start containers:

# vzctl start 101
# vzctl start 102
# vzctl start 103

Run busy loops in all containers:

# vzctl enter 101
[ve101]# while [ true ]; do true; done
# vzctl enter 102
[ve102]# while [ true ]; do true; done
# vzctl enter 103
[ve103]# while [ true ]; do true; done

Check in top that sharing works:

# top
bash       48.0
bash       34.0
bash       17.5

So we see that CPU time is given to the containers in a proportion of roughly 1 : 2 : 3.

Now start some more busy loops. CPU distribution should remain the same.

Disk quota

# vzctl set CTID --diskspace 1048576:1153434 --save
# vzctl start CTID
# vzctl enter CTID
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000
dd: writing `/tmp/tmp.file': Disk quota exceeded
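The --diskspace argument takes soft:hard limits in 1 KB blocks, so the values above work out to 1 GB soft and about 1.1 GB hard, which is why the 1000 MB dd write trips the quota. A quick conversion of the limits used above:

```shell
# Sketch: convert the --diskspace limits (1 KB blocks) to GB.
# 1048576 KB = exactly 1 GB; 1153434 KB is about 1.1 GB.
awk 'BEGIN {
    soft = 1048576; hard = 1153434    # from --diskspace soft:hard
    printf "soft %.2f GB, hard %.2f GB\n", soft/1048576, hard/1048576
}'
```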