Migration from Linux-VServer to OpenVZ

This document describes migration from a Linux-VServer based virtualization setup to OpenVZ.

Description of the challenge:

The challenge is to migrate from Linux-VServer to OpenVZ by booting the OpenVZ kernel and updating the existing utility-level configuration so that the existing guest OSes work over the OpenVZ kernel.

Details of the migration process, step by step:

1. Initial conditions: the following example of a Linux-VServer based setup was used for the experiment:

- Kernel linux-2.6.17.13 was patched with patch-2.6.17.13-vs2.0.2.1.diff and rebuilt;
- util-vserver-0.30.211 tools were used for creating virtual environments;

 # vserver-info
 Versions:
 Kernel: 2.6.17.13-vs2.0.2.1
 VS-API: 0x00020002
 util-vserver: 0.30.211; Dec  5 2006, 17:10:21
 Features:
 CC: gcc, gcc (GCC) 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)
 CXX: g++, g++ (GCC) 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)
 CPPFLAGS: 
 CFLAGS: '-g -O2 -std=c99 -Wall -pedantic -W -funit-at-a-time'
 CXXFLAGS: '-g -O2 -ansi -Wall -pedantic -W -fmessage-length=0 -funit-at-a-time'
 build/host: i686-pc-linux-gnu/i686-pc-linux-gnu
 Use dietlibc: yes
 Build C++ programs: yes
 Build C99 programs: yes
 Available APIs: v13,net
 ext2fs Source: kernel
 syscall(2) invocation: alternative
 vserver(2) syscall#: 273/glibc
 Paths:
 prefix: /usr/local
 sysconf-Directory: ${prefix}/etc
 cfg-Directory: ${prefix}/etc/vservers
 initrd-Directory: $(sysconfdir)/init.d
 pkgstate-Directory: ${prefix}/var/run/vservers
 vserver-Rootdir: /vservers
 #

VServer v345 was built using the vserver vX build utility and populated from a tarballed Fedora Core 4 template.

 # vserver v345 start
 Starting system logger:                                    [  OK  ]
 Initializing random number generator:                      [  OK  ]
 Starting crond:                                            [  OK  ]
 Starting atd:                                              [  OK  ]
 # vserver v345 enter
 [/]# ls -l
 total 44
 drwxr-xr-x    2 root     root         4096 Oct 26  2004 bin
 drwxr-xr-x    3 root     root         4096 Dec  8 17:16 dev
 drwxr-xr-x   27 root     root         4096 Dec  8 15:21 etc
 -rw-r--r--    1 root     root            0 Dec  8 15:33 halt
 drwxr-xr-x    2 root     root         4096 Jan 24  2003 home
 drwxr-xr-x    7 root     root         4096 Oct 26  2004 lib
 drwxr-xr-x    2 root     root         4096 Jan 24  2003 mnt
 drwxr-xr-x    3 root     root         4096 Oct 26  2004 opt
 -rw-r--r--    1 root     root            0 Dec  7 20:17 poweroff
 dr-xr-xr-x   80 root     root            0 Dec  8 11:38 proc
 drwxr-x---    2 root     root         4096 Dec  7 20:17 root
 drwxr-xr-x    2 root     root         4096 Oct 26  2004 sbin
 drwxrwxrwt    2 root     root           40 Dec  8 17:16 tmp
 drwxr-xr-x   15 root     root         4096 Jul 27  2004 usr
 drwxr-xr-x   17 root     root         4096 Oct 26  2004 var
 [/]# sh
 sh-2.05b#
 .........

As a result we obtain the running virtual environment v345:

 # vserver-stat
 CTX   PROC    VSZ    RSS  userTIME   sysTIME    UPTIME NAME
 0       51  90.9M  26.3M   0m58s75   2m42s57  33m45s93 root server
 49153    4  10.2M   2.8M   0m00s00   0m00s11  21m45s42 v345
 # 
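
Before touching anything, it is worth noting where the guest's files live. The paths below follow from the vserver-info output above (prefix /usr/local, cfg-Directory ${prefix}/etc/vservers, vserver-Rootdir /vservers); this is only a convenience check, not part of the original procedure:

 # ls /vservers/v345
 # ls /usr/local/etc/vservers/v345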

2. Starting migration to OpenVZ: downloading and installing the stable OpenVZ kernel.

First of all we should download and install the latest stable version of the OpenVZ kernel:

 #  wget http://openvz.org/download/kernel/stable/ovzkernel-2.6.9-023stab032.1.i686.rpm
 #  rpm -ihv ovzkernel-2.6.9-023stab032.1.i686.rpm
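
As an optional sanity check, the kernel and initrd files referenced in the grub.conf listing below should now be present in /boot:

 # ls /boot/vmlinuz-2.6.9-023stab032.1 /boot/initrd-2.6.9-023stab032.1.img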

If you use the GRUB boot loader, then before installing the OpenVZ kernel the file /boot/grub/grub.conf looks like this:

 # grub.conf generated by anaconda
 #
 # Note that you do not have to rerun grub after making changes to this file
 # NOTICE:  You have a /boot partition.  This means that
 #          all kernel and initrd paths are relative to /boot/, eg.
 #          root (hd0,0)
 #          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
 #          initrd /initrd-version.img
 #boot=/dev/sda
 default=0
 timeout=5
 splashimage=(hd0,0)/grub/splash.xpm.gz
 hiddenmenu
 title Red Hat Enterprise Linux AS (2.6.17.13-vs2.0.2.1)
         root (hd0,0)
         kernel /vmlinuz-2.6.17.13-vs2.0.2.1 ro root=/dev/VolGroup00/LogVol00 
         initrd /initrd-2.6.17.13-vs2.0.2.1.img

After installing the new kernel it should look like this:

 # grub.conf generated by anaconda
 #
 # Note that you do not have to rerun grub after making changes to this file
 # NOTICE:  You have a /boot partition.  This means that
 #          all kernel and initrd paths are relative to /boot/, eg.
 #          root (hd0,0)
 #          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
 #          initrd /initrd-version.img
 #boot=/dev/sda
 default=0
 timeout=5
 splashimage=(hd0,0)/grub/splash.xpm.gz
 hiddenmenu
 title Red Hat Enterprise Linux AS (2.6.9-023stab032.1)
         root (hd0,0)
         kernel /vmlinuz-2.6.9-023stab032.1 ro root=/dev/VolGroup00/LogVol00
         initrd /initrd-2.6.9-023stab032.1.img
 title Red Hat Enterprise Linux AS (2.6.17.13-vs2.0.2.1)
         root (hd0,0)
         kernel /vmlinuz-2.6.17.13-vs2.0.2.1 ro root=/dev/VolGroup00/LogVol00
         initrd /initrd-2.6.17.13-vs2.0.2.1.img

The default parameter should be left set to 0 so that the OpenVZ kernel, which is now the first entry, is booted by default after the machine is rebooted.
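
A quick way to double-check that the OpenVZ kernel really is the first title entry (and therefore the one selected by default=0):

 # grep -n '^title' /boot/grub/grub.conf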

Some more manual configuration is needed at this step. Update the /etc/sysctl.conf file according to the following listing (comment out all other settings that are not present in the listing below):

 # On Hardware Node we generally need
 # packet forwarding enabled and proxy arp disabled
 net.ipv4.ip_forward = 1
 net.ipv4.conf.default.proxy_arp = 0
 # Enables source route verification
 net.ipv4.conf.all.rp_filter = 1
 # Enables the magic-sysrq key
 kernel.sysrq = 1
 # TCP Explicit Congestion Notification
 #net.ipv4.tcp_ecn = 0
 # we do not want all our interfaces to send redirects
 net.ipv4.conf.default.send_redirects = 1
 net.ipv4.conf.all.send_redirects = 0
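
If you want the new values to take effect immediately rather than waiting for the reboot below, they can be reloaded from the file:

 # sysctl -p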

As SELinux should be disabled, put the following line into /etc/sysconfig/selinux:

 SELINUX=disabled
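
The config file change takes effect on the next boot; for the currently running system the mode can be checked, and optionally switched to permissive by hand:

 # getenforce
 # setenforce 0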

Now reboot the machine. After rebooting and logging in, you will see the following reply to a vserver-stat call:

 # vserver-stat
 can not change context: migrate kernel feature missing and 'compat' API disabled: Function not implemented
 #

It is natural that virtual environment v345 is now unavailable. The following steps are devoted to making it work over the OpenVZ kernel.

3. Downloading and installing the vzctl package

The OpenVZ solution requires installing a set of tools: the vzctl and vzquota packages. Let us download and install these tools:

 # wget http://openvz.org/download/utils/vzctl-3.0.13-1.i386.rpm
 # wget http://openvz.org/download/utils/vzquota-3.0.9-1.i386.rpm
 # rpm -Uhv vzctl-3.0.13-1.i386.rpm vzquota-3.0.9-1.i386.rpm
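
An optional check that both packages installed cleanly:

 # rpm -q vzctl vzquota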

If rpm complains about unresolved dependencies, you'll have to satisfy them first, then repeat the installation. Then start OpenVZ:

 # /sbin/service vz start
 Starting OpenVZ:                                           [  OK  ]
 Bringing up interface venet0:                              [  OK  ]
 Configuring interface venet0:                              [  OK  ]
 #

Currently the vzlist utility is unable to find any container:

 # vzlist
 VE not found
 #

4. Updating various configuration files to make the existing guest file trees work

Move the existing guest OS file tree to the place OpenVZ expects it (the root/345 directory will be used as the container's mount point, VE_ROOT, below):

 # cd /vz
 # mkdir -p private root/345
 # mv /vservers/v345 /vz/private/345
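
A quick look to confirm the guest's file tree ended up where OpenVZ expects a container's private area:

 # ls /vz/private/345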

Now it is time to create a configuration file for the OpenVZ container. Use the basic sample configuration provided in the /etc/sysconfig/vz-scripts/ve-vps.basic.conf-sample file:

 # cd /etc/sysconfig/vz-scripts
 # cp ve-vps.basic.conf-sample 345.conf

Set the ONBOOT parameter in the 345.conf file:

 .....
 ONBOOT="yes"
 .....

to make the container start when the node boots, and add a few parameters specific to container 345:

 .....
 VE_ROOT="/vz/root/345"
 VE_PRIVATE="/vz/private/345"
 ORIGIN_SAMPLE="vps.basic"
 HOSTNAME="test345.my.org"
 IP_ADDRESS="192.168.0.145"
 .....
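
The per-container parameters can also be applied with vzctl instead of editing the file by hand; a sketch using the values from this example (the --save flag writes them into 345.conf):

 # vzctl set 345 --onboot yes --hostname test345.my.org --ipadd 192.168.0.145 --save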

And reboot the machine:

 # reboot
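
Rebooting is not strictly necessary; since ONBOOT only matters at node startup, the migrated container can also be started by hand straight away:

 # vzctl start 345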

5. Testing that the guest OS works over OpenVZ; see the OpenVZ User's Guide for details on vzctl.

After rebooting you will be able to see the running container 345 that has been migrated from VServer:

 # vzlist -a
 CTID      NPROC  STATUS  IP_ADDR         HOSTNAME
 345          5   running 192.168.0.145   test345.my.org
 #

And run commands in it:

 # vzctl exec 345 ls -l
 total 48
 drwxr-xr-x    2 root     root         4096 Oct 26  2004 bin
 drwxr-xr-x    3 root     root         4096 Dec 11 12:42 dev
 drwxr-xr-x   27 root     root         4096 Dec 11 12:44 etc
 -rw-r--r--    1 root     root            0 Dec 11 12:13 fastboot
 -rw-r--r--    1 root     root            0 Dec  8 15:33 halt
 drwxr-xr-x    2 root     root         4096 Jan 24  2003 home
 drwxr-xr-x    7 root     root         4096 Oct 26  2004 lib
 drwxr-xr-x    2 root     root         4096 Jan 24  2003 mnt
 drwxr-xr-x    3 root     root         4096 Oct 26  2004 opt
 -rw-r--r--    1 root     root            0 Dec  7 20:17 poweroff
 dr-xr-xr-x   70 root     root            0 Dec 11 12:42 proc
 drwxr-x---    2 root     root         4096 Dec  7 20:17 root
 drwxr-xr-x    2 root     root         4096 Dec 11 12:13 sbin
 drwxrwxrwt    2 root     root         4096 Dec  8 12:40 tmp
 drwxr-xr-x   15 root     root         4096 Jul 27  2004 usr
 drwxr-xr-x   17 root     root         4096 Oct 26  2004 var
 #
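
You can also get an interactive shell inside the container, much as vserver v345 enter did before:

 # vzctl enter 345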

Potential issues: This work is still in progress. Some issues may arise with tuning the network for containers. To be continued.

[[Category:HOWTO]]