Getting started with OpenVZ live CD

This article is written for the OpenVZ LiveCD and assumes that the reader is just starting to use OpenVZ. (Download live CD)

Introduction

As you probably know, OpenVZ allows the user to create VEs, or Virtual Environments, which behave very much like real computers. A real computer can run various distributions: Debian, Gentoo, Red Hat and Novell products, etc. In the same way, a VE can be based on various OS (Operating System) templates. On the LiveCD, only a few minimal OS templates are installed because of the disk space limit. Each VE is identified by its number -- a VEID.

VE creation

So, how do you create a VE with a VEID of 101 based on the Debian template? It is very easy: just type the following command in your terminal (you must be root):

# vzctl create 101 --ostemplate debian-3.1-i386-minimal
Creating VE private area (debian-3.1-i386-minimal)
Performing postcreate actions
VE private area was created

vzctl is the tool that manages VEs. Look in the /vz/template/cache/ (CentOS LiveCD) or /var/lib/vz/template/cache/ (KNOPPIX LiveCD) directory for the other OS templates available on the LiveCD:

# ls -1 /var/lib/vz/template/cache/
centos-4-i386-minimal.tar.gz
debian-3.1-i386-minimal.tar.gz
fedora-core-5-i386-minimal.tar.gz

List of VEs

You can get the list of all created VEs on the HN (Hardware Node) using the vzlist command:

#  vzlist -a
      VEID      NPROC STATUS  IP_ADDR         HOSTNAME
       101          - stopped -               -

As you can see, VE 101 is in the stopped state now.
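
Note that without the -a flag, vzlist shows only running VEs, so a freshly created (stopped) VE will not appear in its output. A quick check (the exact "not found" wording varies between vzctl versions):

# vzlist
VE not found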

Starting VE

Let's start it:

# vzctl start 101
Starting VE ...
VE is mounted
Setting CPU units: 1000
VE start in progress...
# vzlist -a
      VEID      NPROC STATUS  IP_ADDR         HOSTNAME
       101          5 running                 -
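
You can also query the state of an individual VE with vzctl status (the line below is roughly what vzctl of this era prints):

# vzctl status 101
VEID 101 exist mounted running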

Executing commands in VE

From the "vzlist" command you see that 5 processes are running inside VE 101. (The "NPROC" field indicates the number of Processes, or PIDs, that are active in the VE -- not the number of Processors, or CPUs.) Being on usual hardware node you can use ps command to identify those, and the same command can be used here. The only difference is that this command should be called inside VE.

In order to run any command inside a VE, vzctl exec is used:

# vzctl exec 101 ps
  PID TTY          TIME CMD
    1 ?        00:00:00 init
 7672 ?        00:00:00 rc
 7674 ?        00:00:00 S10sysklogd
 7677 ?        00:00:00 syslogd
 7678 ?        00:00:00 syslogd
 7683 ?        00:00:00 ps
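
Any other command works the same way; for instance, a trivial check (assuming the Debian 3.1 template created above):

# vzctl exec 101 cat /etc/debian_version
3.1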

Entering VE

Any self-respecting OS provides a shell for the user. This is how you can get the VE's shell:

# vzctl enter 101
entered into VE 101
#

In this shell you can do almost everything you can do on the real HN. For example, create a new user:

# useradd new-user
# passwd new-user
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
# mkdir /home/new-user
# chown new-user /home/new-user/
# su new-user
$ cd ~
$ pwd
/home/new-user
$ exit
#
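
As a shortcut, vzctl can also set or change a user's password directly from the HN, without entering the VE (per the vzctl man page, --userpasswd creates the user if it does not exist; the password here is just an example):

# vzctl set 101 --userpasswd new-user:secret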

In order to exit from the VE's shell, just type exit:

# exit
logout
exited from VE 101
#

Setting up VE networking

Let's set up networking in the VE. First, enable IP forwarding on the HN and bring up the venet0 interface; then assign an IP address to the VE:

# echo 1 > /proc/sys/net/ipv4/ip_forward
# ifconfig venet0 up
# vzctl set 101 --ipadd 10.1.1.1 --save
Adding IP address(es): 10.1.1.1
Saved parameters for VE 101
# vzlist -a
      VEID      NPROC STATUS  IP_ADDR         HOSTNAME
       101          4 running 10.1.1.1        -
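
You can verify that the address is visible inside the VE as well (with the venet device, the address typically shows up on a venet0:0 alias):

# vzctl exec 101 ifconfig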

Now your Hardware Node can ping the VE, and the VE can ping the HN:

# ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=3.80 ms

--- 10.1.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 3.804/3.804/3.804/0.000 ms
#
# vzctl exec 101 ping 192.168.0.244
PING 192.168.0.244 (192.168.0.244) 56(84) bytes of data.
64 bytes from 192.168.0.244: icmp_seq=1 ttl=64 time=0.508 ms

#

However, it is not yet possible to ping other computers on the network: for that, we need to set up NAT (Network Address Translation) and set the nameserver.

Assume that you've set up networking on the HN (for example, via DHCP), that the IP address of your node is 192.168.0.244, and that the nameserver IP address is 192.168.1.1.

# iptables -t nat -A POSTROUTING -s 10.1.1.1 -o eth0 -j SNAT --to 192.168.0.244
# vzctl set 101 --nameserver 192.168.1.1 --save
File resolv.conf was modified
Saved parameters for VE 101
# vzctl exec 101 ping google.com
PING google.com (64.233.167.99) 56(84) bytes of data.
64 bytes from py-in-f99.google.com (64.233.167.99): icmp_seq=1 ttl=241 time=23.0 ms
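
By the way, if the HN's address is assigned dynamically (e.g. via DHCP), a MASQUERADE rule is a common alternative to SNAT, since it does not hard-code the source address:

# iptables -t nat -A POSTROUTING -s 10.1.1.1 -o eth0 -j MASQUERADE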

Installing software inside VE

You have probably noticed that there are not many packages in the VE. This is because a minimal template was used. But of course, you can install any software in the VE yourself. For example, in Debian the usual apt-get tool can be used.

Now, for example, we can install gcc inside VE 101 for development purposes:

# vzctl enter 101
entered into VE 101
#
# apt-get install gcc
Reading Package Lists... Done
Building Dependency Tree... Done
The following extra packages will be installed:
  binutils cpp cpp-3.3 gcc-3.3
Suggested packages:
  binutils-doc cpp-doc make manpages-dev autoconf automake libtool flex bison gdb gcc-doc gcc-3.3-doc
Recommended packages:
  libc-dev libc6-dev
The following NEW packages will be installed:
  binutils cpp cpp-3.3 gcc gcc-3.3
0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
Need to get 5220kB of archives.
After unpacking 13.6MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://ftp.freenet.de stable/main binutils 2.15-6 [2221kB]
Get:2 http://ftp.freenet.de stable/main cpp-3.3 1:3.3.5-13 [1393kB]
Get:3 http://ftp.freenet.de stable/main cpp 4:3.3.5-3 [29.6kB]
Get:4 http://ftp.freenet.de stable/main gcc-3.3 1:3.3.5-13 [1570kB]
Get:5 http://ftp.freenet.de stable/main gcc 4:3.3.5-3 [4906B]
Fetched 5220kB in 10s (507kB/s)
Selecting previously deselected package binutils.
(Reading database ... 7436 files and directories currently installed.)
Unpacking binutils (from .../binutils_2.15-6_i386.deb) ...
Selecting previously deselected package cpp-3.3.
Unpacking cpp-3.3 (from .../cpp-3.3_1%3a3.3.5-13_i386.deb) ...
Selecting previously deselected package cpp.
Unpacking cpp (from .../cpp_4%3a3.3.5-3_i386.deb) ...
Selecting previously deselected package gcc-3.3.
Unpacking gcc-3.3 (from .../gcc-3.3_1%3a3.3.5-13_i386.deb) ...
Selecting previously deselected package gcc.
Unpacking gcc (from .../gcc_4%3a3.3.5-3_i386.deb) ...
Setting up binutils (2.15-6) ...

Setting up cpp-3.3 (3.3.5-13) ...
Setting up cpp (3.3.5-3) ...
Setting up gcc-3.3 (3.3.5-13) ...
Setting up gcc (3.3.5-3) ...

# exit
logout
exited from VE 101
#
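
The same approach works for the other templates: in a VE created from the CentOS minimal template, yum would be the tool to use (a sketch, assuming a hypothetical VE 102 created from that template, with networking set up as shown above and yum configured in the template):

# vzctl exec 102 yum -y install gcc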
Note: In the LiveCD environment, you may have to increase the shmpages barrier/limit for the VE (see the next section), or you will run out of "disk space" when trying to install software.

Resource limiting

A very important feature of VEs is that you can limit their resources: CPU, memory, and disk space. This is also done via vzctl. For example, to set the shmpages (shared memory pages) barrier:limit, you should give this command:

# vzctl set 101 --shmpages 16384:16384 --save

This will give VE 101 64 MB of shmpages (one page equals 4 KB on i386: 4 KB * 16384 = 64 MB).

Current usage values and limits of memory-related resources can be viewed through the /proc/bc/VEID/resources file:

# cat /proc/bc/101/resources # or /proc/user_beancounters on 2.6.9 kernels
            kmemsize         628209     976969    2752512    2936012          0
            lockedpages           0          0         32         32          0
            privvmpages        5238       6885      49152      53575          0
            shmpages           5012       5014       8192       8192          0
            numproc               3         11         65         65          0
            physpages          5084       6020          0 2147483647          0
            vmguarpages           0          0       6144 2147483647          0
            oomguarpages       5084       6020       6144 2147483647          0
            numtcpsock            0          2         80         80          0
            numflock              1          5        100        110          0
            numpty                0          1         16         16          0
            numsiginfo            0          6        256        256          0
            tcpsndbuf             0       4440     319488     524288          0
            tcprcvbuf             0      42180     319488     524288          0
            othersockbuf       2220       6660     132096     336896          0
            dgramrcvbuf           0       2220     132096     132096          0
            numothersock          1          6         80         80          0
            dcachesize            0          0    1048576    1097728          0
            numfile             106        339       2048       2048          0
            numiptent            10         10        128        128          0
#

The first column is the resource name, the second is current usage, the third is peak usage, the fourth and fifth are the barrier and the limit, and the last column is the fail counter.
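
A quick way to spot resources that are hitting their limits is to print only the lines with a nonzero fail counter (a minimal sketch, assuming the six-column layout shown above):

# awk '$6 > 0' /proc/bc/101/resources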

Note that if you have nonzero values in the last column, it means that this VE has experienced a resource shortage. This is a very common reason why some applications fail to work in a VE. In this case, you should increase the barriers/limits accordingly; see resource shortage for more info.
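
For example, if privvmpages showed failures, you could raise its barrier and limit (the values below are purely illustrative; choose ones that fit your workload):

# vzctl set 101 --privvmpages 65536:69632 --save
Saved parameters for VE 101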

Stopping/removing VE

Well, let's stop the VE and destroy it:

# vzctl stop 101
Stopping VE ...
VE was stopped
VE is unmounted
# vzctl destroy 101
Destroying VE private area: /var/lib/vz/private/101
VE private area was destroyed
#
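
A final vzlist -a confirms that the VE is gone (again, the exact "not found" message differs between vzctl versions):

# vzlist -a
VE not found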

Links

That's all you need to start playing with OpenVZ. Additional information can be found in the vzctl man page and at http://wiki.openvz.org/.

If you experience any difficulties, contact us via http://forum.openvz.org/. Templates and other tools are available from http://download.openvz.org/.