{{Roughstub}}

This article describes the migration from Linux-VServer to OpenVZ.

== Details of migration process ==

=== Initial conditions ===

The following example of a Linux-VServer based setup was used for the experiment:
* kernel linux-2.6.17.13 was patched with patch-2.6.17.13-vs2.0.2.1.diff and rebuilt;
* util-vserver-0.30.211 tools were used for creating containers.

 # vserver-info
 Versions:
                    Kernel: 2.6.17.13-vs2.0.2.1
                    VS-API: 0x00020002
              util-vserver: 0.30.211; Dec  5 2006, 17:10:21
 
 Features:
                        CC: gcc, gcc (GCC) 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)
                       CXX: g++, g++ (GCC) 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)
                  CPPFLAGS:
                    CFLAGS: '-g -O2 -std=c99 -Wall -pedantic -W -funit-at-a-time'
                  CXXFLAGS: '-g -O2 -ansi -Wall -pedantic -W -fmessage-length=0 -funit-at-a-time'
                build/host: i686-pc-linux-gnu/i686-pc-linux-gnu
              Use dietlibc: yes
        Build C++ programs: yes
        Build C99 programs: yes
            Available APIs: v13,net
 
             ext2fs Source: kernel
     syscall(2) invocation: alternative
       vserver(2) syscall#: 273/glibc
 
 Paths:
                    prefix: /usr/local
         sysconf-Directory: ${prefix}/etc
             cfg-Directory: ${prefix}/etc/vservers
          initrd-Directory: $(sysconfdir)/init.d
        pkgstate-Directory: ${prefix}/var/run/vservers
           vserver-Rootdir: /vservers
 #
− | |||
VServer v345 was built using vserver vX build utility and populated by using the tarballed template of Fedora Core 4. | VServer v345 was built using vserver vX build utility and populated by using the tarballed template of Fedora Core 4. | ||
− | |||

 # vserver v345 start
 Starting system logger:                     [  OK  ]
 Initializing random number generator:       [  OK  ]
 Starting crond:                             [  OK  ]
 Starting atd:                               [  OK  ]
 # vserver v345 enter
 [/]# ls -l
 total 44
 drwxr-xr-x    2 root root 4096 Oct 26  2004 bin
 drwxr-xr-x    3 root root 4096 Dec  8 17:16 dev
 drwxr-xr-x   27 root root 4096 Dec  8 15:21 etc
 -rw-r--r--    1 root root    0 Dec  8 15:33 halt
 drwxr-xr-x    2 root root 4096 Jan 24  2003 home
 drwxr-xr-x    7 root root 4096 Oct 26  2004 lib
 drwxr-xr-x    2 root root 4096 Jan 24  2003 mnt
 drwxr-xr-x    3 root root 4096 Oct 26  2004 opt
 -rw-r--r--    1 root root    0 Dec  7 20:17 poweroff
 dr-xr-xr-x   80 root root    0 Dec  8 11:38 proc
 drwxr-x---    2 root root 4096 Dec  7 20:17 root
 drwxr-xr-x    2 root root 4096 Oct 26  2004 sbin
 drwxrwxrwt    2 root root   40 Dec  8 17:16 tmp
 drwxr-xr-x   15 root root 4096 Jul 27  2004 usr
 drwxr-xr-x   17 root root 4096 Oct 26  2004 var
 [/]# sh
 sh-2.05b#
 .........
− | |||
As a result we obtain running virtual environment v345: | As a result we obtain running virtual environment v345: | ||
− | |||

 # vserver-stat
 CTX   PROC    VSZ    RSS  userTIME   sysTIME    UPTIME NAME
 0       51  90.9M  26.3M   0m58s75   2m42s57  33m45s93 root server
 49153    4  10.2M   2.8M   0m00s00   0m00s11  21m45s42 v345
 #
− | |||
− | |||
− | |||
− | |||
− | |||

=== Starting migration to OpenVZ ===
− | |||
− | |||
− | |||

Download and install the stable OpenVZ kernel, as described in [[Quick installation]].
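
On an RPM-based host this boils down to something like the following sketch (the kernel package name here is purely hypothetical; use the file you actually downloaded):

 # rpm -ihv ovzkernel-2.6.18-028stab035.1.i686.rpm

Also make sure the new kernel is the default entry in your boot loader configuration before rebooting.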
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||

After the kernel is installed, reboot the machine. After rebooting and logging in, you will see the following reply to a vserver-stat call:
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||

 # vserver-stat
 can not change context: migrate kernel feature missing and 'compat' API disabled: Function not implemented
− | |||
− | |||

Naturally, virtual environment v345 is now unavailable. The following steps are devoted to making it work on the OpenVZ kernel.

=== Downloading and installing vzctl package ===

OpenVZ requires a set of tools: the vzctl and vzquota packages. Download and install them, as described in [[Quick installation]].
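
For example, with downloaded RPM packages (the version numbers shown are hypothetical):

 # rpm -Uhv vzctl-3.0.11-1.i386.rpm vzquota-3.0.8-1.i386.rpm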
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||

If rpm complains about unresolved dependencies, you'll have to satisfy them first, then repeat the installation. Then launch OpenVZ:

 # /sbin/service vz start
 Starting OpenVZ:                           [  OK  ]
 Bringing up interface venet0:              [  OK  ]
 Configuring interface venet0:              [  OK  ]
− | |||
− | |||

At this point the vzlist utility cannot find any containers:

 # vzlist
 Containers not found

=== Updating different configurations ===

Move the existing guest file system to the right place (note that mv creates the target directory 345 itself; creating it beforehand would make mv put v345 inside it instead):

 # cd /vz
 # mkdir private
 # mv /vservers/v345 /vz/private/345

In Debian Lenny the path is /var/lib/vz/private/345 instead. In any case it is a good idea to keep the guest file system on a dedicated partition or LVM volume (shown in the example below) and simply mount it there instead of moving:

 # mkdir /var/lib/vz/private/345
 # mount /dev/mapper/vg01-lvol5 /var/lib/vz/private/345
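
To make such a mount persist across reboots, you can also add an /etc/fstab entry along these lines (the ext3 file system type is an assumption; use whatever the volume actually holds):

 /dev/mapper/vg01-lvol5  /var/lib/vz/private/345  ext3  defaults  0  2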

Now it is time to create a configuration file for the OpenVZ container. Use the basic sample configuration presented in the /etc/sysconfig/vz-scripts/ve-vps.basic.conf-sample file:

 # cd /etc/sysconfig/vz-scripts
 # cp ve-vps.basic.conf-sample 345.conf

In Debian Lenny the configuration is located in /etc/vz/conf/; in this case type:

 # cd /etc/vz/conf
 # cp ve-vps.basic.conf-sample 345.conf

Now, let's set some parameters for the new container. First, we need to tell which distro the container is running:

 # echo "OSTEMPLATE=\"fedora-core-4\"" >> 345.conf

or, for Debian Lenny:

 # echo "OSTEMPLATE=\"debian\"" >> 345.conf
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||

Then we set a few more parameters:

 vzctl set 345 --onboot yes --save        # make the container start upon reboot
 vzctl set 345 --ipadd 192.168.0.145 --save
 vzctl set 345 --hostname test345.my.org --save
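
Each --save writes the corresponding parameter into 345.conf, so the result can be verified there (the parameter names below are how vzctl records these options):

 # grep -E 'ONBOOT|IP_ADDRESS|HOSTNAME' 345.conf
 ONBOOT="yes"
 IP_ADDRESS="192.168.0.145"
 HOSTNAME="test345.my.org"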

== Testing that the guest OS works over OpenVZ ==

Now you can start the container:

 # vzctl start 345

and see if it's running:

 # vzlist -a
       CTID      NPROC STATUS  IP_ADDR         HOSTNAME
        345          5 running 192.168.0.145   test345.my.org

You can run commands in it:

 # vzctl exec 345 ls -l
 total 48
 drwxr-xr-x    2 root root 4096 Oct 26  2004 bin
 drwxr-xr-x    3 root root 4096 Dec 11 12:42 dev
 drwxr-xr-x   27 root root 4096 Dec 11 12:44 etc
 -rw-r--r--    1 root root    0 Dec 11 12:13 fastboot
 -rw-r--r--    1 root root    0 Dec  8 15:33 halt
 drwxr-xr-x    2 root root 4096 Jan 24  2003 home
 drwxr-xr-x    7 root root 4096 Oct 26  2004 lib
 drwxr-xr-x    2 root root 4096 Jan 24  2003 mnt
 drwxr-xr-x    3 root root 4096 Oct 26  2004 opt
 -rw-r--r--    1 root root    0 Dec  7 20:17 poweroff
 dr-xr-xr-x   70 root root    0 Dec 11 12:42 proc
 drwxr-x---    2 root root 4096 Dec  7 20:17 root
 drwxr-xr-x    2 root root 4096 Dec 11 12:13 sbin
 drwxrwxrwt    2 root root 4096 Dec  8 12:40 tmp
 drwxr-xr-x   15 root root 4096 Jul 27  2004 usr
 drwxr-xr-x   17 root root 4096 Oct 26  2004 var
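
You can also check network connectivity from the hardware node, using the IP address assigned earlier:

 # ping -c 3 192.168.0.145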
− | |||
− | |||

== Issues ==

=== Starting ===

If starting fails with the message '''Unable to start init, probably incorrect template''', either the /sbin/init file is missing from the guest file system or not executable, or the guest file system is absent or misplaced altogether. The path where it must be placed is specified by the VE_PRIVATE parameter in the vz.conf file. On Debian this file can be found in /etc/vz.

=== Networking ===

==== Starting networking in VEs ====

The vserver-originating containers do not initialize networking at all, so run the following command (inside the migrated container) to enable networking at startup:

 update-rc.d networking defaults
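
update-rc.d is Debian-specific; for a Red Hat style guest, such as the Fedora Core 4 template used above, the equivalent should be (assuming the standard network initscript is present):

 # vzctl exec 345 /sbin/chkconfig network on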

==== Migrating your VServer Shorewall setup ====

If you had the [http://www.shorewall.net/ Shorewall firewall] running on the hardware node to route traffic to and from your guests, here are a couple of pieces of advice, provided you want a networking setup close to what you had with VServer (i.e. running <code>vnet</code> interfaces, not <code>veth</code> ones):
* do not use the <code>venet0</code> interface in Shorewall's configuration as the <code>vz</code> service starts after Shorewall (at least on Debian), and thus the interface does not exist when Shorewall starts. Do not use <code>detect</code> for the broadcast in <code>/etc/shorewall/interfaces</code>.
* for your VEs to be able to talk to each other, use the <code>routeback</code> option for <code>venet0</code> (and others) in <code>/etc/shorewall/interfaces</code>, as in the sketch below.
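
A hypothetical <code>/etc/shorewall/interfaces</code> line reflecting the advice above (the zone name and broadcast address are examples; adjust them to your network and zones file):

 #ZONE   INTERFACE   BROADCAST        OPTIONS
 vz      venet0      192.168.0.255    routeback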

==== IP src from VEs ====

If you run a mail server in a VE, and the hardware node has multiple network interfaces, you may have mail routing issues because of the source IP address of the packets coming from the hardware node. Simply specify an interface in <code>/etc/vz/vz.conf</code>:
<pre>VE_ROUTE_SRC_DEV="iface_name"</pre>

=== Disk space information ===

Disk space information inside the container shows up empty. Do the following inside the container to fix it:

 rm /etc/mtab
 ln -s /proc/mounts /etc/mtab
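
After that, df run inside the container should report real figures:

 # vzctl exec 345 df -h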

=== /dev ===

VServer mounts the /dev/pts filesystem for the container transparently, whereas OpenVZ does not. To compensate for the omission, move aside the /dev directory in the vserver-originating container and copy in the /dev directory from an OpenVZ-based container.
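
A minimal sketch of that procedure, run on the hardware node (paths follow the layout used above; container 101 stands for any existing OpenVZ-created container to borrow /dev from):

 # cd /vz/private/345
 # mv dev dev.vserver
 # cp -a /vz/private/101/dev dev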

=== Ubuntu udev ===

Additionally, Ubuntu-based vservers have the udev package installed, which prevents access to the console in OpenVZ. This error message is an example of the problem:

 # vzctl enter 345
 enter into CT 345 failed
 Unable to open pty: No such file or directory

The fix is to remove the udev package from the guest:

 # vzctl exec 345 'dpkg --force-depends --purge udev'
 dpkg: udev: dependency problems, but removing anyway as you request:
  initramfs-tools depends on udev (>= 117-5).
 (Reading database ... 15227 files and directories currently installed.)
 Removing udev ...
 Purging configuration files for udev ...
 dpkg - warning: while removing udev, directory `/lib/udev/devices/net' not empty so not removed.
 dpkg - warning: while removing udev, directory `/lib/udev/devices' not empty so not removed.

Now restart the container; you should then be able to use the console:

 # vzctl restart 345
 Restarting container
 ...
 <SNIP>
 ...
 Container start in progress...
 
 # vzctl enter 345
 entered into CT 345
 root@test:/#

=== /proc ===

The /proc filesystem is not automatically mounted by OpenVZ, so the guest needs to mount it itself. The simplest (though not the best) way to do this is to stick the following command at the end of /etc/init.d/bootmisc.sh:

 mount /proc
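
From the hardware node this can be done in one shot (the path assumes a Debian-style guest that actually has this script):

 # vzctl exec 345 'echo "mount /proc" >> /etc/init.d/bootmisc.sh'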

[[Category:HOWTO]]