Migration from Linux-VServer to OpenVZ

{{Roughstub}}
 
  
This article describes the migration from Linux-VServer to OpenVZ.
 
== Description of challenge ==
The challenge is to migrate from Linux-VServer to OpenVZ by booting the OpenVZ kernel and updating the existing utility-level configuration so that the existing guest OSes work on the OpenVZ kernel.
  
 
== Details of migration process ==
 
 
=== Updating different configurations ===
  
Move the existing guest OSes to the right place:
  
 
   # cd /vz
   # mkdir private/345
   # mv /vservers/v345 /vz/private/345
In Debian Lenny the path is /var/lib/vz/private/345 instead.
 
In any case it is a good idea to have the guest file system on a dedicated partition or LVM logical volume (shown in the example below) and just mount it there instead of moving:
 
  # mkdir /var/lib/vz/private/345
  # mount /dev/mapper/vg01-lvol5 /var/lib/vz/private/345
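To make that mount survive reboots, a matching /etc/fstab entry can be added on the hardware node; this is a sketch using the same device and path as above, and the ext3 filesystem type is only an assumption:

  /dev/mapper/vg01-lvol5  /var/lib/vz/private/345  ext3  defaults  0  2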
 
  
 
Now it is time to create a configuration file for the OpenVZ container. Use the basic sample:
 
   # cd /etc/sysconfig/vz-scripts
   # cp ve-vps.basic.conf-sample 345.conf
In Debian Lenny the configuration is located in /etc/vz/conf/; in this case type:
 
  # cd /etc/vz/conf
  # cp ve-vps.basic.conf-sample 345.conf
 
  
 
Now, let's set some parameters for the new container.
 
  
 
First, we need to tell which distro the container is running:
 
   # echo "OSTEMPLATE=\"fedora-core-4\"" >> 345.conf
  # echo "OSTEMPLATE=\"debian\"" >> 345.conf   (for Debian Lenny)
 
  
 
Then we set a few more parameters:
 
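For example, typical parameters to set at this point are the container's IP address, hostname and boot behaviour; the values below are placeholders, and <code>vzctl ... --save</code> writes them into 345.conf:

  # vzctl set 345 --ipadd 192.168.0.45 --hostname test345 --save
  # vzctl set 345 --onboot yes --save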
 
== Issues ==
 
=== Starting ===
If starting fails with the message '''Unable to start init, probably incorrect template''', either the /sbin/init file is missing from the guest file system or is not executable, or the guest file system is completely absent or misplaced. The proper path where it must be placed is specified in the vz.conf file, namely by the VE_PRIVATE parameter. For Debian this file can be found in /etc/vz.
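A quick way to check both conditions from the hardware node is sketched below; the container ID 345 and the /vz/private location follow the examples used earlier in this article:

  # grep VE_PRIVATE /etc/vz/vz.conf
  # ls -l /vz/private/345/sbin/init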
 
 
=== Networking ===
 
 
 
==== Starting networking in VEs ====
 
 
 
The vserver-originating containers do not initialize networking at all. Thus one needs to run the following command (inside the migrated container) to enable networking at startup:
 
 
 
 update-rc.d networking defaults
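The command above is Debian-specific. For a Fedora-based guest, such as the fedora-core-4 example earlier, the equivalent step would presumably be enabling the network service with chkconfig (an assumption for non-Debian guests):

 chkconfig network on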
 
 
 
==== Migrating your VServer Shorewall setup ====
 
 
 
If you had the [http://www.shorewall.net/ Shorewall firewall] running on the hardware node to route traffic to and from your guests, here are a couple of pieces of advice, provided you want a networking setup close to what you had with VServer (i.e. running <code>venet</code> interfaces, not <code>veth</code> ones); a sample <code>/etc/shorewall/interfaces</code> entry is shown after the list:
 
* do not use the <code>venet0</code> interface in Shorewall's configuration as the <code>vz</code> service starts after Shorewall (at least on Debian) and thus the interface does not exist when Shorewall starts. Do not use <code>detect</code> for the broadcast in <code>/etc/shorewall/interfaces</code>.
* for your VEs to be able to talk to each other, use the <code>routeback</code> option for <code>venet0</code> (and others) in <code>/etc/shorewall/interfaces</code>.
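For instance, an <code>/etc/shorewall/interfaces</code> line following both pieces of advice could look like the sketch below; the zone name and broadcast address are placeholders for illustration:

 #ZONE   INTERFACE   BROADCAST        OPTIONS
 loc     venet0      192.168.0.255    routeback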
 
 
 
==== IP src from VEs ====
 
  
If you run a mail server in a VE, and if the hardware node has multiple network interfaces, you may have mail routing issues because of the source IP address of the packets coming from the hardware node. Simply specify an interface in <code>/etc/vz/vz.conf</code>:
<pre>VE_ROUTE_SRC_DEV="iface_name"</pre>
 
 
 
=== Disk space information ===
 
 
 
Disk space information inside the container is empty. Do the following (inside the container) to fix it:
 
 
  rm /etc/mtab
  ln -s /proc/mounts /etc/mtab
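The same fix can also be applied from the hardware node without entering the guest, using <code>vzctl exec</code> in the same way as the udev example further below (container ID 345 assumed):

  # vzctl exec 345 'rm -f /etc/mtab; ln -s /proc/mounts /etc/mtab'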
 
=== /dev ===
 
 
VServer mounts the /dev/pts filesystem for the container transparently, whereas OpenVZ does not. To compensate for the omission, you need to move aside the /dev directory in the vserver-originating container and copy in the /dev directory from an OpenVZ-based container.
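A minimal sketch of that step on the hardware node, assuming the migrated container lives under /vz/private/345 as in the earlier example and that an OpenVZ-created container with the placeholder ID 101 is available to copy from:

  # mv /vz/private/345/dev /vz/private/345/dev.vserver
  # cp -a /vz/private/101/dev /vz/private/345/dev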
 
 
=== Ubuntu udev ===
 
 
Additionally, Ubuntu-based vservers have the udev package installed, which prevents access to the console in OpenVZ. This error message is an example of the problem:
 
 
 # vzctl enter 345
 enter into CT 345 failed
 Unable to open pty: No such file or directory
 
 
The fix is to remove the udev package from the guest:
 
 
 
 # vzctl exec 345 'dpkg --force-depends --purge udev'
 (Reading database ... dpkg: udev: dependency problems, but removing anyway as you request:
  initramfs-tools depends on udev (>= 117-5).
 15227 files and directories currently installed.)
 Removing udev ...
 Purging configuration files for udev ...
 dpkg - warning: while removing udev, directory `/lib/udev/devices/net' not empty so not removed.
 dpkg - warning: while removing udev, directory `/lib/udev/devices' not empty so not removed.
 
 
 
Now restart the container; you should then be able to use the console.
 
 
 
 # vzctl restart 345
 Restarting container
 ...
   <SNIP>
 ...
 Container start in progress...
 
 # vzctl enter 345
 entered into CT 345
 root@test:/#
 
 
=== /proc ===
 
 
The /proc filesystem is not automatically mounted by OpenVZ, so the vserver needs to mount it itself. The simplest (not the best) way to do this is to add the following command at the end of /etc/init.d/bootmisc.sh:
 
 mount /proc
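Note that <code>mount /proc</code> only works if /proc is listed in the guest's /etc/fstab; if it is not there, a standard entry like the following can be added:

 proc  /proc  proc  defaults  0  0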
 
  
 
[[Category:HOWTO]]
 
