
Demo scripts


The following demo scripts (scenarios) can be used to show the advantages of OpenVZ.


Full VE lifecycle

Create a VE, set its IP, start it, add a user, enter the VE, execute a command, show ps axf output from inside, stop it, and destroy it. The whole cycle should take about two minutes ("compare that to the time you need to deploy a new (non-virtual) server!"). During the demonstration, describe what is happening and why.

Here are the example commands needed:

# VE=123
# IP=10.1.1.123
# sed -i "/$IP /d" ~/.ssh/known_hosts
# time vzctl create $VE --ostemplate fedora-core-5-i386-default
# vzctl set $VE --ipadd $IP --hostname newVE --save
# vzctl start $VE
# vzctl exec $VE ps axf
# vzctl set $VE --userpasswd guest:secret --save
# ssh guest@$IP
[newVE]# ps axf
[newVE]# logout
# vzctl stop $VE
# vzctl destroy $VE

Massive VE creation

Create and start 50 or 100 VEs in a shell loop. This shows fast deployment and high density.

Here are the example commands needed:

# time for ((VE=200; VE<250; VE++)); do \
>  time vzctl create $VE --ostemplate fedora-core-5-i386-default; \
>  vzctl start $VE; \
> done
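
When the loop finishes, a quick count shows the density achieved (vzlist lists running containers; -H suppresses the header so wc counts only the VEs):

# vzlist -H | wc -l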

Massive VE load

Use the VEs from the previous item and load them with ab or http_load. This demo shows that multiple VEs keep working just fine under load, with low response times. First, assign each VE an IP address:

# for ((VE=200; VE<250; VE++)); do \
>  vzctl set $VE --ipadd 10.1.1.$VE --save; \
> done
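
The load needs something listening on those IPs, so start a web server in each VE (this assumes the OS template ships an httpd service; adjust for your template):

# for ((VE=200; VE<250; VE++)); do \
>  vzctl exec $VE service httpd start; \
> done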

On another machine:

# rpm -ihv http_load

FIXME: http_load commands
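
One possible invocation, for illustration (the urls.txt file name and the -parallel/-seconds values here are arbitrary; http_load fetches the URLs listed in a file at a given parallelism for a given duration):

# for ((VE=200; VE<250; VE++)); do echo "http://10.1.1.$VE/"; done > urls.txt
# http_load -parallel 10 -seconds 60 urls.txt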

Live migration

If you have two boxes, do "vzmigrate --online" from one box to the other. You can, for example, run Xvnc in a VE, connect to it with a VNC client, start xscreensaver-demo, and do the live migration while the picture is moving. You will see xscreensaver stall for a few seconds, but then it keeps running — on another machine! That looks amazing, to say the least.

FIXME: commands, setup, vnc template.
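
The migration itself boils down to a single command (a sketch: dst.example.com stands for the destination hardware node, and 101 for the VE being moved; the VNC setup still needs to be documented):

# vzmigrate --online dst.example.com 101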

Resource management

The scenarios below aim to show how OpenVZ resource management works.

UBC protection

fork() bomb

Inside the VE, run an endless loop that keeps spawning background loops:

# while [ true ]; do \
>     while [ true ]; do \
>         echo " " > /dev/null; \
>     done & \
> done

You can see that the number of processes inside the VE stays within its limit; only the numproc or kmemsize fail counters in /proc/user_beancounters increase.
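
To watch the counters during the demo, pull the relevant rows on the hardware node (the fail counter is the last column; the grep pattern is just one convenient filter):

# grep -E "numproc|kmemsize" /proc/user_beancounters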

dentry cache eat up

FIXME

CPU scheduler

Create 3 VEs:

# vzctl create 101
# vzctl create 102
# vzctl create 103

Set VEs weights:

# vzctl set 101 --cpuunits 1000 --save
# vzctl set 102 --cpuunits 2000 --save
# vzctl set 103 --cpuunits 3000 --save

This sets the CPU sharing ratio to VE101 : VE102 : VE103 = 1 : 2 : 3.

Run VEs:

# vzctl start 101
# vzctl start 102
# vzctl start 103

Run busy loops in the VEs, each in its own terminal (the loop occupies the shell):

# vzctl enter 101
[ve101]# while [ true ]; do true; done
# vzctl enter 102
[ve102]# while [ true ]; do true; done
# vzctl enter 103
[ve103]# while [ true ]; do true; done

Check in top that sharing works:

# top
COMMAND    %CPU
bash       48.0
bash       34.0
bash       17.5

So we see that CPU time is given to the VEs in a proportion of roughly 1 : 2 : 3 (17.5 : 34.0 : 48.0).

Disk quota

Set the disk space limits (the values are in 1 KB blocks: 1048576 = 1 GB soft limit, 1153434 ≈ 1.1 GB hard limit), then try to write past them:

# vzctl set VEID --diskspace 1048576:1153434 --save
# vzctl start VEID
# vzctl enter VEID
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000
dd: writing `/tmp/tmp.file': Disk quota exceeded
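
As a quick follow-up, df inside the VE shows the quota-limited disk size, and removing the file frees the space again:

[ve]# df -h /
[ve]# rm /tmp/tmp.file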