Migration from Linux-VServer to OpenVZ

{{Roughstub}}
This article describes migration from a Linux-VServer based virtualization solution to OpenVZ.

== Migration process ==
The challenge is to migrate from Linux-VServer to OpenVZ by booting the OpenVZ kernel and updating the existing utility-level configuration, so that the existing guest OSes work on the OpenVZ kernel.

=== Initial conditions ===
The following example of a Linux-VServer based setup was used for the experiment:
* Kernel linux-2.6.17.13 was patched with patch-2.6.17.13-vs2.0.2.1.diff and rebuilt;
* util-vserver-0.30.211 tools were used for creating the containers.
<code>
# vserver-info
Versions:
.........
vserver-Rootdir: /vservers
#
</code>
VServer v345 was built using the <code>vserver vX build</code> utility and populated from a tarballed template of Fedora Core 4.
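For reference, a minimal sketch of such a build, assuming your util-vserver version supports the <code>template</code> build method and using a hypothetical tarball path (check <code>vserver ... build --help</code> for the exact flags your version accepts):
<code>
# vserver v345 build -m template -- -d fc4 -t /tmp/fedora-core-4.tar.gz
</code>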
<code>
# vserver v345 start
Starting system logger: [ OK ]
sh-2.05b#
.........
</code>
As a result, we obtain the running virtual environment v345:
<code>
# vserver-stat
.........
#
</code>
=== Downloading and installing the stable OpenVZ kernel ===
Install the OpenVZ kernel, as described in [[Quick installation]].
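On an RPM-based distro this boils down to something like the following (a sketch; the actual package file name depends on the kernel release you downloaded):
<code>
# rpm -ihv ovzkernel-*.rpm
</code>
The package should add a boot entry for the new kernel to your boot loader configuration (e.g. /boot/grub/grub.conf); make sure it is the default one.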
After the kernel is installed, reboot the machine. After rebooting and logging in, you will see the following reply to a <code>vserver-stat</code> call:
<code>
# vserver-stat
can not change context: migrate kernel feature missing and 'compat' API disabled: Function not implemented
#
</code>
Naturally, the virtual environment v345 is now unavailable. The following steps are devoted to making it work on the OpenVZ kernel.
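Before proceeding, you can double-check that the machine has actually booted the OpenVZ kernel (a quick sanity check; the exact version string will differ on your system):
<code>
# uname -r    # stable OpenVZ kernels carry a "stab" suffix, e.g. 2.6.18-028stab...
</code>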
=== Downloading and installing the vzctl package ===
OpenVZ requires a set of user-level tools: the vzctl and vzquota packages. Download and install them, as described in [[Quick installation]].
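On an RPM-based distro the installation looks roughly like this (a sketch; the file names depend on the versions you downloaded):
<code>
# rpm -Uhv vzctl-*.rpm vzquota-*.rpm
</code>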
Then start OpenVZ:
<code>
# /sbin/service vz start
Starting OpenVZ: [ OK ]
Bringing up interface venet0: [ OK ]
Configuring interface venet0: [ OK ]
#
</code>
At this point, the <code>vzlist</code> utility cannot find any containers:
 
<code>
# vzlist
Containers not found
#
</code>
=== Updating the configuration to make the existing guest OS work ===
Move the existing guest OS file system to the right place:
<code>
# cd /vz
# mkdir private
# mkdir private/345
# mv /vservers/v345 /vz/private/345
</code>
In Debian Lenny the path is /var/lib/vz/private/345 instead. In any case, it is a good idea to keep the guest file system on a dedicated partition or LVM volume (shown in the example below) and simply mount it there instead of moving it:
<code>
# mkdir /var/lib/vz/private/345
# mount /dev/mapper/vg01-lvol5 /var/lib/vz/private/345
</code>
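To make such a mount persistent across reboots, you could add an fstab entry on the hardware node (a sketch based on the example above; the device path, mount point and filesystem type are assumptions):
<code>
/dev/mapper/vg01-lvol5  /var/lib/vz/private/345  ext3  defaults  0  2
</code>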
Now it is time to create a configuration file for the OpenVZ container. Use the basic sample configuration from the /etc/sysconfig/vz-scripts/ve-vps.basic.conf-sample file:
<code>
# cd /etc/sysconfig/vz-scripts
# cp ve-vps.basic.conf-sample 345.conf
</code>
In Debian Lenny the configuration is located in /etc/vz/conf/ instead; in this case type:
<code>
# cd /etc/vz/conf
# cp ve-vps.basic.conf-sample 345.conf
</code>
Now, let's set some parameters for the new container. First, we need to tell which distro this particular container is running:
<code>
# echo "OSTEMPLATE=\"fedora-core-4\"" >> 345.conf
</code>
For a Debian Lenny guest, type instead:
<code>
# echo "OSTEMPLATE=\"debian\"" >> 345.conf
</code>
Then we set a few more parameters:
<code>
# vzctl set 345 --onboot yes --save
# vzctl set 345 --ipadd 192.168.0.145 --save
# vzctl set 345 --hostname test345.my.org --save
</code>
The <code>--onboot</code> flag makes the container start automatically when the node reboots.
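You can check the resulting configuration file (a quick verification; the exact set and order of lines depends on your sample file):
<code>
# grep -E 'ONBOOT|OSTEMPLATE|IP_ADDRESS|HOSTNAME' 345.conf
ONBOOT="yes"
OSTEMPLATE="fedora-core-4"
IP_ADDRESS="192.168.0.145"
HOSTNAME="test345.my.org"
</code>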
== Testing how the guest OS works over OpenVZ ==
Now you can start the container:
<code>
# vzctl start 345
</code>
and see if it is running:
<code>
# vzlist -a
      CTID      NPROC STATUS  IP_ADDR         HOSTNAME
       345          5 running 192.168.0.145   test345.my.org
#
</code>
You can run commands in it:
<code>
# vzctl exec 345 ls -l
total 48
drwxr-xr-x 15 root root 4096 Jul 27 2004 usr
drwxr-xr-x 17 root root 4096 Oct 26 2004 var
#
</code>
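Besides one-off commands, you can also get an interactive shell inside the container (leave it with <code>exit</code>):
<code>
# vzctl enter 345
entered into CT 345
</code>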
== Issues ==

=== Starting ===
If starting fails with the message '''Unable to start init, probably incorrect template''', then either the /sbin/init file is missing from the guest file system or it is not executable; alternatively, the guest file system is completely absent or misplaced. The proper path where you must place it is specified in the vz.conf file, namely by the VE_PRIVATE parameter. For Debian this file can be found in /etc/vz.

=== Networking ===

==== Starting networking in VEs ====
The vserver-originating containers do not initialize the network at all. Thus, one needs to run the following command (inside the migrated container) to enable networking at startup:
<code>
update-rc.d networking defaults
</code>

==== Migrating your VServer Shorewall setup ====
If you had the [http://www.shorewall.net/ Shorewall firewall] running on the hardware node to route traffic to and from your guests, here are a few tips, provided you want a networking setup close to what you had with VServer (i.e. using <code>venet</code> interfaces, not <code>veth</code> ones):
* do not use the <code>venet0</code> interface in Shorewall's configuration, as the <code>vz</code> service starts after Shorewall (at least on Debian) and thus the interface does not exist when Shorewall starts. Do not use <code>detect</code> for the broadcast in <code>/etc/shorewall/interfaces</code>.
* for your VEs to be able to talk to each other, use the <code>routeback</code> option for <code>venet0</code> (and others) in <code>/etc/shorewall/interfaces</code>.

==== IP source address from VEs ====
If you run a mail server in a VE, and the hardware node has multiple network interfaces, you may have mail routing issues because of the source IP address of the packets leaving the hardware node. Simply specify an interface in <code>/etc/vz/vz.conf</code>:
<pre>VE_ROUTE_SRC_DEV="iface_name"</pre>

=== Disk space information ===
Disk space information inside the container is empty. Do the following inside the guest to fix it:
<code>
rm /etc/mtab
ln -s /proc/mounts /etc/mtab
</code>

=== /dev ===
VServer mounts the /dev/pts filesystem for containers transparently, whereas OpenVZ does not. To compensate for the omission, you need to move aside the /dev directory in the vserver-originating container and copy in the /dev directory from an OpenVZ-based container.

=== Ubuntu udev ===
Additionally, Ubuntu-based vservers have the udev package installed, which prevents access to the console in OpenVZ. This error message is an example of the problem:
<code>
# vzctl enter 345
enter into CT 345 failed
Unable to open pty: No such file or directory
</code>
The fix is to remove the udev package from the guest:
<code>
# vzctl exec 345 'dpkg --force-depends --purge udev'
dpkg: udev: dependency problems, but removing anyway as you request:
 initramfs-tools depends on udev (>= 117-5).
(Reading database ... 15227 files and directories currently installed.)
Removing udev ...
Purging configuration files for udev ...
dpkg - warning: while removing udev, directory `/lib/udev/devices/net' not empty so not removed.
dpkg - warning: while removing udev, directory `/lib/udev/devices' not empty so not removed.
</code>
Now restart the container; you should then be able to use the console:
<code>
# vzctl restart 345
Restarting container
<SNIP>
Container start in progress...
# vzctl enter 345
entered into CT 345
root@test:/#
</code>

=== /proc ===
The /proc filesystem is not automatically mounted by OpenVZ, so the guest needs to mount it itself.
The simplest (though not the best) way to do this is to add the following command at the end of /etc/init.d/bootmisc.sh:
<code>
mount /proc
</code>
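A slightly more defensive variant (a sketch; it mounts /proc only when it is not already mounted):
<code>
# inside the guest, e.g. at the end of /etc/init.d/bootmisc.sh
grep -qs ' /proc proc ' /proc/mounts || mount -t proc proc /proc
</code>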
[[Category:HOWTO]]