Migration from Linux-VServer to OpenVZ

{{Roughstub}}
This article describes the migration from a Linux-VServer based virtualization setup to OpenVZ: booting the OpenVZ kernel and updating the existing utility-level configuration so that the existing guest OSs keep working on top of the OpenVZ kernel.

== Details of migration process ==
=== Initial conditions ===
The following example of a Linux-VServer based setup was used for the experiment:
* Kernel linux-2.6.17.13 was patched with patch-2.6.17.13-vs2.0.2.1.diff and rebuilt;
* Util-vserver-0.30.211 tools were used for creating containers:
<code>
# vserver-info
Versions:
VS-API: 0x00020002
util-vserver: 0.30.211; Dec 5 2006, 17:10:21
Features:
CC: gcc, gcc (GCC) 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)
syscall(2) invocation: alternative
vserver(2) syscall#: 273/glibc
Paths:
prefix: /usr/local
vserver-Rootdir: /vservers
#
</code>
VServer v345 was built using the <code>vserver vX build</code> utility and populated from a tarballed Fedora Core 4 template.
<code>
# vserver v345 start
Starting system logger: [ OK ]
sh-2.05b#
.........
</code>
As a result we obtain a running virtual environment, v345:
<code>
# vserver-stat
CTX PROC VSZ RSS userTIME sysTIME UPTIME NAME
0 51 90.9M 26.3M 0m58s75 2m42s57 33m45s93 root server
49153 4 10.2M 2.8M 0m00s00 0m00s11 21m45s42 v345
#
</code>
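
Before rebooting into a new kernel it is worth stopping the guest cleanly and noting its IP address and hostname, which you will need again when configuring the OpenVZ container. The paths below assume util-vserver's default configuration layout under /etc/vservers; they may differ on your installation:
<code>
# vserver v345 stop
# cat /etc/vservers/v345/interfaces/0/ip     # the guest's IP address
# cat /etc/vservers/v345/uts/nodename        # the guest's hostname
</code>
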
 
=== Starting migration to OpenVZ ===
First of all, download and install the latest stable OpenVZ kernel, as described in [[Quick installation]]:
<code>
# wget http://openvz.org/download/kernel/stable/ovzkernel-2.6.9-023stab032.1.i686.rpm
# rpm -ihv ovzkernel-2.6.9-023stab032.1.i686.rpm
</code>
If you use the GRUB boot loader, installing the kernel adds a new boot entry to /boot/grub/grub.conf; after the installation the file should look like this:
<code>
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
# initrd /initrd-version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux AS (2.6.9-023stab032.1)
root (hd0,0)
kernel /vmlinuz-2.6.9-023stab032.1 ro root=/dev/VolGroup00/LogVol00
initrd /initrd-2.6.9-023stab032.1.img
title Red Hat Enterprise Linux AS (2.6.17.13-vs2.0.2.1)
root (hd0,0)
kernel /vmlinuz-2.6.17.13-vs2.0.2.1 ro root=/dev/VolGroup00/LogVol00
initrd /initrd-2.6.17.13-vs2.0.2.1.img
</code>
 
The <code>default</code> parameter should be set to 0 so that the OpenVZ kernel is booted by default after the machine is rebooted.
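
As a quick sanity check you can list the <code>default</code> parameter together with the boot entry titles (entries are numbered from 0, top to bottom); with the configuration above the output would be:
<code>
# grep -E '^(default|title)' /boot/grub/grub.conf
default=0
title Red Hat Enterprise Linux AS (2.6.9-023stab032.1)
title Red Hat Enterprise Linux AS (2.6.17.13-vs2.0.2.1)
</code>
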
 
Some more manual configuration is needed at this step.
Update the /etc/sysctl.conf file according to the following listing (comment out all other settings that are not present in the listing below):
 
<code>
# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.ip_forward = 1
net.ipv4.conf.default.proxy_arp = 0
# Enables source route verification
net.ipv4.conf.all.rp_filter = 1
# Enables the magic-sysrq key
kernel.sysrq = 1
# TCP Explicit Congestion Notification
#net.ipv4.tcp_ecn = 0
# we do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
</code>
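
These settings are applied at boot time; since the machine is rebooted into the OpenVZ kernel in a moment this is enough, but you can also apply them to the running system right away:
<code>
# sysctl -p
</code>
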
 
As SELinux should be disabled, put the following line into /etc/sysconfig/selinux:
 
<code>
SELINUX=disabled
</code>
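
If the SELinux userspace tools are installed, you can also check the current mode and put the running system into permissive mode immediately (the config file change above only takes effect at the next boot):
<code>
# getenforce
Enforcing
# setenforce 0
# getenforce
Permissive
</code>
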
 
Now reboot the machine. After rebooting and logging in, you will see the following reply to a vserver-stat call:
 
<code>
# vserver-stat
can not change context: migrate kernel feature missing and 'compat' API disabled: Function not implemented
#
</code>
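
You can confirm that the machine has actually booted the OpenVZ kernel; with the kernel installed above the output should be similar to:
<code>
# uname -r
2.6.9-023stab032.1
</code>
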
It is natural that virtual environment v345 is now unavailable. The following steps are devoted to making it work over the OpenVZ kernel.
=== Downloading and installing the vzctl package ===
OpenVZ requires installing a set of tools: the vzctl and vzquota packages. Download and install them, as described in [[Quick installation]]:
<code>
# wget http://openvz.org/download/utils/vzctl-3.0.13-1.i386.rpm
# wget http://openvz.org/download/utils/vzquota-3.0.9-1.i386.rpm
# rpm -Uhv vzctl-3.0.13-1.i386.rpm vzquota-3.0.9-1.i386.rpm
</code>
If rpm complains about unresolved dependencies, you'll have to satisfy them first, then repeat the installation.
Then start OpenVZ:
<code>
# /sbin/service vz start
Starting OpenVZ: [ OK ]
Bringing up interface venet0: [ OK ]
Configuring interface venet0: [ OK ]
#
</code>
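
Optionally, check that the OpenVZ kernel modules are loaded and that the venet0 interface exists; the exact module names can vary between kernel versions:
<code>
# lsmod | grep vz
# /sbin/ifconfig venet0
</code>
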
Currently the vzlist utility is unable to find any containers:
<code>
# vzlist
Containers not found
#
</code>
=== Updating configuration to make the existing template work ===
Move the existing guest OS file tree to the right place:
<code>
# cd /vz
# mkdir private
# mkdir private/345
# mv /vservers/v345 /vz/private/345
</code>
In Debian Lenny the path is /var/lib/vz/private/345 instead. In any case it is a good idea to have the guest file system on a dedicated partition or LVM logical volume (shown in the example below) and just mount it there instead of moving it:
<code>
# mkdir /var/lib/vz/private/345
# mount /dev/mapper/vg01-lvol5 /var/lib/vz/private/345
</code>
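
Whichever way you choose, make sure the guest's root file system really ends up in the private area, in particular that /sbin/init is present there; a missing or misplaced init is the most common reason for the container refusing to start later (see the Issues section below):
<code>
# ls -l /vz/private/345/sbin/init
</code>
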
Now it is time to create a configuration file for the OpenVZ container. Use the basic sample configuration in the /etc/sysconfig/vz-scripts/ve-vps.basic.conf-sample file:
<code>
# cd /etc/sysconfig/vz-scripts
# cp ve-vps.basic.conf-sample 345.conf
</code>
In Debian Lenny the configuration is located in /etc/vz/conf/; in this case type:
<code>
# cd /etc/vz/conf
# cp ve-vps.basic.conf-sample 345.conf
</code>
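
The sample configuration is expected to point the container's root and private areas at the standard locations via the $VEID variable, which is why the directory above was named 345. A quick check (the exact values may differ between vzctl versions and distributions):
<code>
# grep -E '^VE_(ROOT|PRIVATE)' 345.conf
VE_ROOT="/vz/root/$VEID"
VE_PRIVATE="/vz/private/$VEID"
</code>
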
Now, let's set some parameters for the new container. First, we need to tell which distro the container is running:
<code>
# echo "OSTEMPLATE=\"fedora-core-4\"" >> 345.conf
</code>
or, for Debian Lenny:
<code>
# echo "OSTEMPLATE=\"debian\"" >> 345.conf
</code>
Then we set a few more parameters:
<code>
# vzctl set 345 --onboot yes --save      # make it start upon node reboot
# vzctl set 345 --ipadd 192.168.0.145 --save
# vzctl set 345 --hostname test345.my.org --save
</code>
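
Depending on how the original vserver obtained its DNS settings, you may also want to configure a nameserver for the container; the address below is only a placeholder:
<code>
# vzctl set 345 --nameserver 192.168.0.1 --save
</code>
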
== Testing how the guest OS works over OpenVZ ==
Now you can start the container (see the OpenVZ User's Guide for more on vzctl):
<code>
# vzctl start 345
</code>
and see if it is running:
<code>
# vzlist -a
CTID NPROC STATUS IP_ADDR HOSTNAME
345 5 running 192.168.0.145 test345.my.org
#
</code>
You can run commands in it:
<code>
# vzctl exec 345 ls -l
total 48
drwxr-xr-x 15 root root 4096 Jul 27 2004 usr
drwxr-xr-x 17 root root 4096 Oct 26 2004 var
#
</code>
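
Besides running single commands with vzctl exec, you can get an interactive shell inside the container (type exit to get back to the hardware node):
<code>
# vzctl enter 345
entered into CT 345
</code>
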
== Issues ==

=== Starting ===
If starting fails with the message '''Unable to start init, probably incorrect template''', either the /sbin/init file is missing from the guest file system, or it is not executable, or the guest file system is completely absent or in the wrong location. The proper path where you must place it is specified in the vz.conf file, namely by the VE_PRIVATE parameter. For Debian this file can be found in /etc/vz.

=== Networking ===

==== Starting networking in containers ====
The vserver-originating containers do not initialize networking at all. Thus one needs to run the following command (inside the migrated container) to enable networking at startup:
 update-rc.d networking defaults

==== Migrating your VServer Shorewall setup ====
If you had the [http://www.shorewall.net/ Shorewall firewall] running on the hardware node to route traffic to and from your guests, here are a couple of pieces of advice, provided you want a networking setup close to what you had with VServer (i.e. using <code>venet</code> interfaces, not <code>veth</code> ones):
* Do not use the <code>venet0</code> interface in Shorewall's configuration, as the <code>vz</code> service starts after Shorewall (at least on Debian) and thus the interface does not exist when Shorewall starts. Do not use <code>detect</code> for the broadcast in <code>/etc/shorewall/interfaces</code>.
* For your containers to be able to talk to each other, use the <code>routeback</code> option for <code>venet0</code> (and others) in <code>/etc/shorewall/interfaces</code>.

==== Source IP of packets from containers ====
If you run a mail server in a container, and the hardware node has multiple network interfaces, you may have mail routing issues because of the source IP address of the packets leaving the hardware node. Simply specify an interface in <code>/etc/vz/vz.conf</code>:
<pre>VE_ROUTE_SRC_DEV="iface_name"</pre>

=== Disk space information ===
Disk space information inside the container (e.g. the output of df) is empty. Do the following inside the container to fix it:
 rm /etc/mtab
 ln -s /proc/mounts /etc/mtab

=== /dev ===
VServer mounts the /dev/pts filesystem for containers transparently, whereas OpenVZ does not. To compensate for the omission, you need to move aside the /dev directory in the vserver-originating container and copy in the /dev directory from an OpenVZ-based container.

=== Ubuntu udev ===
Additionally, Ubuntu-based vservers have the udev package installed, which prevents access to the console under OpenVZ. This error message is an example of the problem:
 # vzctl enter 345
 enter into CT 345 failed
 Unable to open pty: No such file or directory
The fix is to remove the udev package from the guest:
 # vzctl exec 345 'dpkg --force-depends --purge udev'
 dpkg: udev: dependency problems, but removing anyway as you request:
  initramfs-tools depends on udev (>= 117-5).
 (Reading database ... 15227 files and directories currently installed.)
 Removing udev ...
 Purging configuration files for udev ...
 dpkg - warning: while removing udev, directory `/lib/udev/devices/net' not empty so not removed.
 dpkg - warning: while removing udev, directory `/lib/udev/devices' not empty so not removed.
Now restart the container; you should then be able to use the console:
 # vzctl restart 345
 Restarting container
 ...
 <SNIP>
 ...
 Container start in progress...
 # vzctl enter 345
 entered into CT 345
 root@test:/#

=== /proc ===
The /proc filesystem is not automatically mounted by OpenVZ, so the container needs to mount it itself.
The simplest (though not the best) way to do this is to add the following command at the end of /etc/init.d/bootmisc.sh:
 mount /proc
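
A quick way to check from the hardware node whether the change took effect, reusing container 345 from the examples above:
 # vzctl restart 345
 # vzctl exec 345 cat /proc/uptime
If /proc is mounted inside the container, the second command prints the uptime values; otherwise it fails with "No such file or directory".
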
[[Category:HOWTO]]