=== Updating different configurations ===

Move the existing guest OSs to the right place:

 # cd /vz
 # mkdir private/345
 # mv /vservers/v345 /vz/private/345

In Debian Lenny the path is /var/lib/vz/private/345 instead.

In any case it is a good idea to have the guest file system in a dedicated partition or LVM volume (shown in the example below) and just mount it there instead of moving:

 # mkdir /var/lib/vz/private/345
 # mount /dev/mapper/vg01-lvol5 /var/lib/vz/private/345

Now it is time to create a configuration file for the OpenVZ container. Use the basic sample configuration:

 # cd /etc/sysconfig/vz-scripts
 # cp ve-vps.basic.conf-sample 345.conf

In Debian Lenny the configuration is located in /etc/vz/conf/; in this case type:

 # cd /etc/vz/conf
 # cp ve-vps.basic.conf-sample 345.conf

Now, let's set some parameters for the new container.

First, we need to tell which distro the container is running:

 # echo "OSTEMPLATE=\"fedora-core-4\"" >> 345.conf

For Debian Lenny use instead:

 # echo "OSTEMPLATE=\"debian\"" >> 345.conf
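Note that the backslash-escaped quotes matter: the value must end up quoted inside the file, i.e. the line should read <code>OSTEMPLATE="debian"</code> in 345.conf. A quick way to verify the quoting (a scratch file is used here so the snippet is safe to run anywhere):

```shell
# Write the line to a scratch file and show what actually landed there.
conf=$(mktemp)
echo "OSTEMPLATE=\"debian\"" >> "$conf"
cat "$conf"   # prints: OSTEMPLATE="debian"
```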

Then we set a few more parameters:
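As an illustrative sketch (the IP address, hostname, and nameserver below are placeholders, and a scratch file stands in for 345.conf so the snippet is safe to run), typical per-container settings can be appended the same way as OSTEMPLATE:

```shell
# Append typical container parameters (placeholder values) to a scratch
# stand-in for 345.conf; on a real node you would append to the actual file.
conf=$(mktemp)
echo 'IP_ADDRESS="192.168.0.45"'       >> "$conf"
echo 'HOSTNAME="guest345.example.com"' >> "$conf"
echo 'NAMESERVER="192.168.0.1"'        >> "$conf"
echo 'ONBOOT="yes"'                    >> "$conf"
cat "$conf"
```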

== Issues ==

=== Starting ===

If starting fails with the message '''Unable to start init, probably incorrect template''', either the /sbin/init file is missing from the guest file system or it is not executable, or the guest file system is completely absent or misplaced. The path where the guest file system must be placed is specified by the VE_PRIVATE parameter in the vz.conf file. For Debian this file can be found in /etc/vz.

=== Networking ===

==== Starting networking in VEs ====

The vserver-originating containers do not initialize the network at all, so you need to enable networking at startup inside the migrated container:

 update-rc.d networking defaults

Alternatively, create the rc symlink by hand:

 cd /etc/rcS.d
 ln -s ../init.d/networking S40networking

==== Migrating your VServer Shorewall setup ====

If you had the [http://www.shorewall.net/ Shorewall firewall] running on the hardware node to route traffic to and from your guests, here are a couple of pieces of advice, provided you want a networking setup close to what you had with VServer (i.e. using <code>venet</code> interfaces, not <code>veth</code> ones):

* Do not rely on the <code>venet0</code> interface in Shorewall's configuration, as the <code>vz</code> service starts after Shorewall (at least on Debian) and thus the interface does not exist when Shorewall starts. In particular, do not use <code>detect</code> for the broadcast in <code>/etc/shorewall/interfaces</code>.
* For your VEs to be able to talk to each other, use the <code>routeback</code> option for <code>venet0</code> (and others) in <code>/etc/shorewall/interfaces</code>.
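Put together, the relevant <code>/etc/shorewall/interfaces</code> entry might look like this (a sketch; the zone name <code>vz</code> is an example, not something this page prescribes, and the broadcast column is left explicit rather than <code>detect</code>):

```
#ZONE   INTERFACE   BROADCAST   OPTIONS
vz      venet0      -           routeback
```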

==== IP src from VEs ====

If you run a mail server in a VE and the hardware node has multiple network interfaces, you may have mail routing issues because of the source IP address of packets originating from the hardware node. Simply specify an interface in <code>/etc/vz/vz.conf</code>:

<pre>VE_ROUTE_SRC_DEV="iface_name"</pre>

=== Disk space information ===

If disk space information in the container is empty, do the following inside the guest to fix it:

 rm /etc/mtab
 ln -s /proc/mounts /etc/mtab

=== /dev ===

VServer mounts the /dev/pts filesystem for the container transparently, whereas OpenVZ does not. To compensate for the omission, move the /dev directory in the vserver-originating container aside and copy in the /dev directory from an OpenVZ-based container.
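The move-and-copy can be sketched as follows. The commands below run against throwaway directories standing in for /vz/private/&lt;migrated CT&gt; and /vz/private/&lt;working CT&gt;, so they are safe to execute as-is; on a real node you would substitute the actual private areas:

```shell
# Stand-ins for the migrated container and a healthy OpenVZ donor container.
guest=$(mktemp -d) && donor=$(mktemp -d)
mkdir -p "$guest/dev/pts" "$donor/dev/pts"
touch "$donor/dev/console"               # pretend the donor has device nodes

mv "$guest/dev" "$guest/dev.vserver"     # move the vserver /dev aside
cp -a "$donor/dev" "$guest/dev"          # copy /dev from the OpenVZ guest
ls "$guest/dev"
```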

=== Ubuntu udev ===

Additionally, Ubuntu-based vservers have the udev package installed, which prevents access to the console in OpenVZ. This error message is an example of the problem:

 # vzctl enter 345
 enter into CT 345 failed
 Unable to open pty: No such file or directory

The fix is to remove the udev package from the guest:

 # vzctl exec 345 'dpkg --force-depends --purge udev'
 (Reading database ... dpkg: udev: dependency problems, but removing anyway as you request:
  initramfs-tools depends on udev (>= 117-5).
 15227 files and directories currently installed.)
 Removing udev ...
 Purging configuration files for udev ...
 dpkg - warning: while removing udev, directory `/lib/udev/devices/net' not empty so not removed.
 dpkg - warning: while removing udev, directory `/lib/udev/devices' not empty so not removed.

Now restart the container; you should then be able to use the console:

 # vzctl restart 345
 Restarting container
 ...
 <SNIP>
 ...
 Container start in progress...

 # vzctl enter 345
 entered into CT 345
 root@test:/#

=== /proc ===

The /proc filesystem is not automatically mounted by OpenVZ, so the vserver guest needs to mount it itself. The simplest (though not the best) way to do this is to add the following command at the end of /etc/init.d/bootmisc.sh:

 mount /proc
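Note that a bare <code>mount /proc</code> only works if the guest's /etc/fstab has an entry for /proc; if it does not, a standard line such as the following is needed (ordinary fstab syntax, not specific to OpenVZ):

```
proc  /proc  proc  defaults  0  0
```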

[[Category:HOWTO]]