Migration from Linux-VServer to OpenVZ

This article describes the migration from Linux-VServer to OpenVZ.

Details of migration process

Initial conditions

The following example of a Linux-VServer based solution was used for the experiment:

  • The kernel linux-2.6.17.13 was patched with patch-2.6.17.13-vs2.0.2.1.diff and rebuilt (see the sketch below);
  • util-vserver 0.30.211 tools were used for creating containers.
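Patching and rebuilding the kernel followed the usual routine. A minimal sketch, assuming the vanilla source tree and the patch live under /usr/src:

 # cd /usr/src/linux-2.6.17.13
 # patch -p1 < ../patch-2.6.17.13-vs2.0.2.1.diff
 # make oldconfig && make && make modules_install && make install

With the patched kernel booted and the util-vserver tools installed, vserver-info reports the following:
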
 # vserver-info
 Versions:
 Kernel: 2.6.17.13-vs2.0.2.1
 VS-API: 0x00020002
 util-vserver: 0.30.211; Dec  5 2006, 17:10:21

 Features:
 CC: gcc, gcc (GCC) 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)
 CXX: g++, g++ (GCC) 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)
 CPPFLAGS: 
 CFLAGS: '-g -O2 -std=c99 -Wall -pedantic -W -funit-at-a-time'
 CXXFLAGS: '-g -O2 -ansi -Wall -pedantic -W -fmessage-length=0 -funit-at-a-time'
 build/host: i686-pc-linux-gnu/i686-pc-linux-gnu
 Use dietlibc: yes
 Build C++ programs: yes
 Build C99 programs: yes
 Available APIs: v13,net
 ext2fs Source: kernel
 syscall(2) invocation: alternative
 vserver(2) syscall#: 273/glibc

 Paths:
 prefix: /usr/local
 sysconf-Directory: ${prefix}/etc
 cfg-Directory: ${prefix}/etc/vservers
 initrd-Directory: $(sysconfdir)/init.d
 pkgstate-Directory: ${prefix}/var/run/vservers
 vserver-Rootdir: /vservers
 #

VServer v345 was built using the vserver vX build utility and populated using a tarballed Fedora Core 4 template.

 # vserver v345 start
 Starting system logger:                                    [  OK  ]
 Initializing random number generator:                      [  OK  ]
 Starting crond: l:                                         [  OK  ]
 Starting atd:                                              [  OK  ]
 # vserver v345 enter
 [/]# ls -l
 total 44
 drwxr-xr-x    2 root     root         4096 Oct 26  2004 bin
 drwxr-xr-x    3 root     root         4096 Dec  8 17:16 dev
 drwxr-xr-x   27 root     root         4096 Dec  8 15:21 etc
 -rw-r--r--    1 root     root            0 Dec  8 15:33 halt
 drwxr-xr-x    2 root     root         4096 Jan 24  2003 home
 drwxr-xr-x    7 root     root         4096 Oct 26  2004 lib
 drwxr-xr-x    2 root     root         4096 Jan 24  2003 mnt
 drwxr-xr-x    3 root     root         4096 Oct 26  2004 opt
 -rw-r--r--    1 root     root            0 Dec  7 20:17 poweroff
 dr-xr-xr-x   80 root     root            0 Dec  8 11:38 proc
 drwxr-x---    2 root     root         4096 Dec  7 20:17 root
 drwxr-xr-x    2 root     root         4096 Oct 26  2004 sbin
 drwxrwxrwt    2 root     root           40 Dec  8 17:16 tmp
 drwxr-xr-x   15 root     root         4096 Jul 27  2004 usr
 drwxr-xr-x   17 root     root         4096 Oct 26  2004 var
 [/]# sh
 sh-2.05b#
 .........

As a result we obtain a running virtual environment, v345:

 # vserver-stat

 CTX   PROC    VSZ    RSS  userTIME   sysTIME    UPTIME NAME
 0       51  90.9M  26.3M   0m58s75   2m42s57  33m45s93 root server
 49153    4  10.2M   2.8M   0m00s00   0m00s11  21m45s42 v345

 # 

Starting migration to OpenVZ

Download and install the stable OpenVZ kernel, as described in Quick installation.
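
At the time of writing, on an RPM-based node this amounted to roughly the following (the kernel version below is just an example; pick the current stable one, and make sure the new kernel is the default entry in your boot loader before rebooting):

 # wget http://openvz.org/download/kernel/stable/ovzkernel-2.6.9-023stab032.1.i686.rpm
 # rpm -ihv ovzkernel-2.6.9-023stab032.1.i686.rpm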

After the kernel is installed, reboot the machine. After rebooting and logging in, you will see the following reply to a vserver-stat call:

 # vserver-stat
 can not change context: migrate kernel feature missing and 'compat' API disabled: Function not implemented

It is natural that virtual environment v345 is now unavailable. The following steps are devoted to making it work on the OpenVZ kernel.

Downloading and installing the vzctl package

OpenVZ requires a set of tools: the vzctl and vzquota packages. Download and install them, as described in Quick installation.
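
Again, on an RPM-based node this looked roughly as follows when the article was written (package versions are examples only):

 # wget http://openvz.org/download/utils/vzctl-3.0.13-1.i386.rpm
 # wget http://openvz.org/download/utils/vzquota-3.0.9-1.i386.rpm
 # rpm -Uhv vzctl-3.0.13-1.i386.rpm vzquota-3.0.9-1.i386.rpm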

If rpm complains about unresolved dependencies, you'll have to satisfy them first, then repeat the installation. Then launch OpenVZ:

 # /sbin/service vz start
 Starting OpenVZ:                                           [  OK  ]
 Bringing up interface venet0:                              [  OK  ]
 Configuring interface venet0:                              [  OK  ]

Currently the vzlist utility is unable to find any containers:

 # vzlist
 Containers not found

Updating different configurations

Move the existing guest OS file systems to the right place:

 # cd /vz
 # mkdir private
 # mkdir private/345
 # mv /vservers/v345 /vz/private/345

On Debian Lenny the path is /var/lib/vz/private/345 instead. In any case, it is a good idea to keep the guest file system on a dedicated partition or LVM volume (shown in the example below) and simply mount it there instead of moving it:

 # mkdir /var/lib/vz/private/345
 # mount /dev/mapper/vg01-lvol5 /var/lib/vz/private/345

Now it is time to create a configuration file for the OpenVZ container. Use the basic sample configuration in /etc/sysconfig/vz-scripts/ve-vps.basic.conf-sample:

 # cd /etc/sysconfig/vz-scripts
 # cp ve-vps.basic.conf-sample 345.conf

On Debian Lenny the configuration is located in /etc/vz/conf/; in this case type:

 # cd /etc/vz/conf
 # cp ve-vps.basic.conf-sample  345.conf

Now, let's set some parameters for the new container.

First, we need to tell OpenVZ which distribution the container is running:

 # echo "OSTEMPLATE=\"fedora-core-4\"" >> 345.conf
 # echo "OSTEMPLATE=\"debian\"" >> 345.conf    (for Debian Lenny)

Then we set a few more parameters:

 vzctl set 345 --onboot yes --save # to make it start upon reboot
 vzctl set 345 --ipadd 192.168.0.145 --save
 vzctl set 345 --hostname test345.my.org --save

Testing that the guest OSes work over OpenVZ

Now you can start the container:

 # vzctl start 345

and see if it's running:

 # vzlist -a
 CTID      NPROC  STATUS  IP_ADDR         HOSTNAME
 345          5   running 192.168.0.145   test345.my.org

You can run commands in it:

 # vzctl exec 345 ls -l
 total 48
 drwxr-xr-x    2 root     root         4096 Oct 26  2004 bin
 drwxr-xr-x    3 root     root         4096 Dec 11 12:42 dev
 drwxr-xr-x   27 root     root         4096 Dec 11 12:44 etc
 -rw-r--r--    1 root     root            0 Dec 11 12:13 fastboot
 -rw-r--r--    1 root     root            0 Dec  8 15:33 halt
 drwxr-xr-x    2 root     root         4096 Jan 24  2003 home
 drwxr-xr-x    7 root     root         4096 Oct 26  2004 lib
 drwxr-xr-x    2 root     root         4096 Jan 24  2003 mnt
 drwxr-xr-x    3 root     root         4096 Oct 26  2004 opt
 -rw-r--r--    1 root     root            0 Dec  7 20:17 poweroff
 dr-xr-xr-x   70 root     root            0 Dec 11 12:42 proc
 drwxr-x---    2 root     root         4096 Dec  7 20:17 root
 drwxr-xr-x    2 root     root         4096 Dec 11 12:13 sbin
 drwxrwxrwt    2 root     root         4096 Dec  8 12:40 tmp
 drwxr-xr-x   15 root     root         4096 Jul 27  2004 usr
 drwxr-xr-x   17 root     root         4096 Oct 26  2004 var

Issues

Starting

If starting fails with the message Unable to start init, probably incorrect template, then either /sbin/init is missing from the guest file system or is not executable, or the guest file system is absent or located in the wrong place. The path where it must be placed is specified in the vz.conf file by the VE_PRIVATE parameter. On Debian this file can be found in /etc/vz.
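
A quick sanity check might look like the sketch below; the CT ID 345 and the Debian default private path are assumptions, so substitute your own VE_PRIVATE value. VE_PRIVATE should point at the directory that actually holds the guest, and sbin/init inside it must exist and be executable:

 # grep VE_PRIVATE /etc/vz/vz.conf
 # ls -l /var/lib/vz/private/345/sbin/init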

Networking

Starting networking in VEs

Containers migrated from VServer do not initialize networking at all, so you need to run the following command (inside the migrated container) to enable networking at startup:

 update-rc.d networking defaults
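
Alternatively, the same command can be run from the hardware node via vzctl exec (assuming CT 345 and a Debian-style guest where update-rc.d is on the default path):

 # vzctl exec 345 update-rc.d networking defaults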

Migrating your VServer Shorewall setup

If you had the Shorewall firewall running on the hardware node to route traffic to and from your guests, here are a couple of pieces of advice, provided you want a networking setup close to what you had with VServer (i.e. using venet interfaces, not veth ones); an example /etc/shorewall/interfaces entry follows the list:

  • do not use the venet0 interface in Shorewall's configuration as the vz service starts after Shorewall (at least on Debian) and thus the interface does not exist when Shorewall starts. Do not use detect for the broadcast in /etc/shorewall/interfaces.
  • for your VEs to be able to talk to each other, use the routeback option for venet0 (and others) in /etc/shorewall/interfaces.
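
For instance, an entry in /etc/shorewall/interfaces might look like the sketch below; the zone name vz and the broadcast address are assumptions for illustration, and the broadcast is given explicitly rather than with detect, as advised above:

 #ZONE   INTERFACE   BROADCAST        OPTIONS
 vz      venet0      192.168.0.255    routeback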

IP src from VEs

If you run a mail server in a VE and the hardware node has multiple network interfaces, you may have mail routing issues because of the source IP address of the packets originating from the hardware node. Simply specify an interface in /etc/vz/vz.conf:

 VE_ROUTE_SRC_DEV="iface_name"

Disk space information

Disk space information in the container (e.g. as reported by df) is empty. To fix it, run the following inside the container:

 rm /etc/mtab
 ln -s /proc/mounts /etc/mtab
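
From the hardware node the same fix can be applied with vzctl exec (CT ID 345 is assumed):

 # vzctl exec 345 rm -f /etc/mtab
 # vzctl exec 345 ln -s /proc/mounts /etc/mtab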

/dev

VServer mounts the /dev/pts filesystem for the container transparently, whereas OpenVZ does not. To compensate for the omission, move aside the /dev directory in the migrated container and copy in the /dev directory from an OpenVZ-based container.
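
A minimal sketch, assuming the migrated container lives in /vz/private/345 and that /vz/private/101 is some other, OpenVZ-created container whose /dev can be copied (both paths and the ID 101 are placeholders):

 # cd /vz/private/345
 # mv dev dev.vserver
 # cp -a /vz/private/101/dev dev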

Ubuntu udev

Additionally, Ubuntu-based vservers have the udev package installed, which prevents access to the console under OpenVZ. This error message is an example of the problem:

 # vzctl enter 345
 enter into CT 345 failed
 Unable to open pty: No such file or directory

The fix is to remove the udev package from the guest:


 # vzctl exec 345 'dpkg --force-depends --purge udev'
 (Reading database ... dpkg: udev: dependency problems, but removing anyway as you request:
  initramfs-tools depends on udev (>= 117-5).
 15227 files and directories currently installed.)
 Removing udev ...
 Purging configuration files for udev ...
 dpkg - warning: while removing udev, directory `/lib/udev/devices/net' not empty so not removed.
 dpkg - warning: while removing udev, directory `/lib/udev/devices' not empty so not removed.


Now restart the container; you should then be able to use the console.


 # vzctl restart 345
 Restarting container
 ...
  <SNIP>
 ...
 Container start in progress...
 # vzctl enter 345
 entered into CT 345
 root@test:/#

/proc

The /proc filesystem is not automatically mounted by OpenVZ, so the container needs to mount it itself. The simplest (though not the best) way to do this is to add the following command at the end of /etc/init.d/bootmisc.sh:

 mount /proc
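
Note that mount /proc relies on /proc being listed in the container's /etc/fstab; if it is not, an fstab entry along the following lines (an assumption, adjust as needed) has to be added first:

 proc   /proc   proc   defaults   0   0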