Physical to container

A rough description of how to migrate an existing physical server into a [[container]].
== Preparing to migrate ==
Stop most services on the machine to be migrated. “Most” means services such as web server, databases and the like, so you will not lose your data. Just leave the bare minimum running (including the ssh daemon). To make things easier, you may like to first follow the basic instructions elsewhere and create a dummy container based on the same Linux distribution you want to migrate. That way you can take that dummy as a template, copy it to your new migrated container and modify it. You can later discard this dummy.
{{Note|Still better is to use this container (from the same Linux distribution you want to migrate) as the starting point for the new installation. In this case, if we are careful to copy only the needed files from the original system, we will be able to skip many of the following steps.}}
 
== Prepare a new “empty” container ==
For OpenVZ this would mean the following (assume you chose CT ID of 123):
<source lang="bash">
mkdir /vz/root/123 /vz/private/123
cat /etc/vz/conf/ve-vps.basic.conf-sample > /etc/vz/conf/123.conf</source>
{{Note|Now the dummy container mentioned above comes in handy: simply copy the xxx.conf file of the dummy to your new yyy.conf and modify it.}}
{{Note|If you have created a container from the same distro as the basis for the migration, simply take note of its CT ID and skip this step.}}
== Copying the data ==
Copy all your data from the machine to an OpenVZ box. Say you'll be using a container with ID of 123; then all the data should be placed in the <code>/vz/private/123/</code> directory (so there will be directories such as <code>/vz/private/123/bin</code>, <code>etc</code>, <code>var</code> and so on). This can be done in several ways:
=== rsync ===
On the new HN, create a file <code>/tmp/exclude.txt</code> with:
<pre>
/tmp
/boot
/lib/modules
/etc/blkid
/etc/mtab
/etc/lvm
/etc/fstab
/etc/udev
</pre>
and run <b>rsync</b> as follows:
<source lang="bash">
rsync -avz -H -X --one-file-system --numeric-ids --exclude-from=/tmp/exclude.txt -e ssh root@a.b.c.d:/ /vz/private/123/
</source>
{{Note|You should add the <code>-H</code> option, so hardlinks will be preserved during the sync, and also include the <code>-X</code> option to preserve extended file attributes.}}
If your source system has multiple partitions (for example <code>/var</code> or <code>/home</code>), repeat the command above for each partition in your system; for example:
<source lang="bash">
rsync -avz -H -X --one-file-system --numeric-ids -e ssh root@a.b.c.d:/var/ /vz/private/123/var/
</source>
'''Advantage:''' Your system doesn't really go down.
 
{{Note|To decrease the downtime, you can use double rsync approach. Run rsync for the first time before stopping most of the services, and then for the second time after stopping services. That way most of the data will be transferred while your server is fully working, and the second rsync will just "catch the latest changes" which is faster.}}
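A minimal sketch of that double pass, reusing the exclude file and address from the example above (the <code>--delete</code> flag on the second pass is an optional extra, so files removed on the source since the first pass disappear from the copy too):
<source lang="bash">
# first pass: bulk copy while the source server is still fully running
rsync -avz -H -X --one-file-system --numeric-ids \
      --exclude-from=/tmp/exclude.txt -e ssh root@a.b.c.d:/ /vz/private/123/

# ... now stop web server, databases, mail and so on on the source machine ...

# second pass: transfer only what changed since the first run
rsync -avz -H -X --one-file-system --numeric-ids --delete \
      --exclude-from=/tmp/exclude.txt -e ssh root@a.b.c.d:/ /vz/private/123/
</source>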
=== tar ===
Create an exclude list <code>/tmp/excludes.excl</code> with the paths that should not go into the archive, one per line, such as:
<pre>
/proc/*
/sys/*
/tmp/*
/usr/src/*
</pre>
Then create the tar. But remember, when the system is ''not'' using udev, you have to check the device nodes after creating your container, because some might not exist (<code>/dev/ptmx</code> or others).
# tar --numeric-owner -cjpf /tmp/mysystem.tar.bz2 / -X /tmp/excludes.excl
Naturally, you can only do this when the critical services (MySQL, apache, ..) are stopped and your /tmp filesystem is big enough to contain your tar.
'''Advantage:''' You don't need to boot from a live cd, so your system doesn't really go down.
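To finish this approach, the archive still has to be moved to the hardware node and unpacked into the container private area; a rough sketch, with the host name purely as an example:
<source lang="bash">
# copy the archive from the source machine to the hardware node
scp /tmp/mysystem.tar.bz2 root@hn.example.com:/tmp/

# on the hardware node: unpack into the container private area,
# keeping numeric owners and permissions
mkdir -p /vz/private/123
tar --numeric-owner -xjpf /tmp/mysystem.tar.bz2 -C /vz/private/123/
</source>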
== Setting container parameters ==
=== OSTEMPLATE ===
You have to add an <code>OSTEMPLATE=xxx</code> line to the <code>/etc/vz/conf/123.conf</code> file, where <code>xxx</code> is the distribution name (like <code>debian-3.0</code>), so that vzctl can make changes specific to this distribution.
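For example (the distribution name below is just an illustration, use the one that matches your system):
<source lang="bash">
echo 'OSTEMPLATE="debian-3.0"' >> /etc/vz/conf/123.conf
</source>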
 
{{Note|If you copied from the dummy container or are using it as the basis for your migrated system, then this step is already accomplished.}}
=== IP address(es) ===
Also, you have to supply an IP address for the new container:
vzctl set 123 --ipadd x.x.x.x --save
== Making adjustments ==
Since a container is a bit different from a real physical server, you have to edit some files inside your new container.
=== /etc/inittab ===
A container does not have real ttys, so you have to disable getty in <code>/etc/inittab</code> (i.e. <code>/vz/private/123/etc/inittab</code>).
sed -i -e 's/^[0-9].*getty.*tty/#&/g' /vz/private/123/etc/inittab
=== /etc/mtab ===
Link <code>/etc/mtab</code> to <code>/proc/mounts</code>, for <code>df</code> to work properly:
rm -f /vz/private/123/etc/mtab
ln -sf /proc/mounts /vz/private/123/etc/mtab
{{Note|The problem here is that the container's root filesystem (<code>/</code>) is mounted not from the container itself, but rather from the host system. That leaves <code>/etc/mtab</code> in the container without a record of <code>/</code> being mounted, thus <code>df</code> doesn't show it. By linking <code>/etc/mtab → /proc/mounts</code> we make sure <code>/etc/mtab</code> shows what is really mounted in the container.
Sure, this is not the only way to fix <code>df</code>; you can just manually add a line to <code>/etc/mtab</code> telling that <code>/</code> is mounted, and make sure this line will be there after a reboot.}}
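If you prefer the manual route, a sketch of it could look like the following; the <code>simfs</code> device and filesystem type are what an OpenVZ container typically reports, so check <code>/proc/mounts</code> inside your container and adjust accordingly:
<source lang="bash">
# record the root filesystem in the container's /etc/mtab once
echo "/dev/simfs / simfs rw 0 0" > /vz/private/123/etc/mtab
# something similar has to be re-done at every boot, e.g. from an init
# script of your choice, or you are back to the symlink approach above
</source>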
=== /etc/fstab ===
Since you do not have any real disk partitions in a container, <code>/etc/fstab</code> (or most of it) is no longer needed. Empty it (excluding the lines for <code>/dev/pts</code>, <code>/proc</code>, <code>/sys</code> and such):
<source lang="bash">
mv /vz/private/123/etc/fstab /vz/private/123/etc/fstab.old
egrep '/dev/pts|/dev/shm|/proc|/sys' /vz/private/123/etc/fstab.old > /vz/private/123/etc/fstab
</source>
You can also mount a devpts in a running (but not fully functional) container:
 vzctl exec 123 mount -t devpts none /dev/pts
A still better approach would be simply to copy the <code>/etc/fstab</code> from a container previously created from a template of the same or similar distribution. In the case of RedHat/CentOS 5 the devpts line is:
<source lang="bash">
none /dev/pts devpts rw 0 0
</source>
and for RedHat/CentOS 6:
<source lang="bash">
none /dev/pts devpts rw,gid=5,mode=620 0 0
</source>
=== /dev ===
{{Note|Once again, if you are using the container from the same distro as the basis, and you were careful not to overwrite <code>/dev</code> with <b>rsync</b> (by using the <code>--one-file-system</code> option), you can skip this section.}}
==== Introduction: static /dev ====
In order for a container to work, some device nodes should be present in the container's <code>/dev</code>. For modern distributions, udev takes care of it. For a variety of reasons udev doesn't make much sense in a container, so the best thing to do is to disable udev and create the needed device nodes manually.
Note that in some distributions <code>/dev</code> is mounted on <code>tmpfs</code>; this will not work with a static <code>/dev</code>. So what you need to do is find out where <code>/dev</code> is being mounted on <code>tmpfs</code> and remove that. This is highly distribution-dependent; please add info for your distro here.
 
For SUSE 11.0, it is found in <code>/etc/init.d/boot</code>.
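A quick, purely illustrative way to hunt for the place where <code>/dev</code> gets mounted on <code>tmpfs</code> is to grep the container's boot scripts and fstab:
<source lang="bash">
# look for tmpfs being mounted over /dev in container 123
grep -rn tmpfs /vz/private/123/etc/init.d/ /vz/private/123/etc/fstab 2>/dev/null | grep /dev
</source>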
After you made sure your <code>/dev</code> is static, populate it with needed device nodes.
Please pay attention to the access permissions of the device files being created: the default file mode for newly created files is affected by <code>umask</code> ([[w:umask]]). You can use the <code>--mode</code> option of <code>mknod</code> to set the desired permissions.
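As an illustration (tty- and pty-related nodes are covered in the next subsection), a few commonly needed nodes can be created on the hardware node like this; the major/minor numbers are the standard Linux ones and the modes are just reasonable defaults:
<source lang="bash">
cd /vz/private/123/dev
mknod --mode 666 null c 1 3      # /dev/null
mknod --mode 666 zero c 1 5      # /dev/zero
mknod --mode 666 full c 1 7      # /dev/full
mknod --mode 444 random c 1 8    # /dev/random
</source>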
 
{{Note|Now the dummy container mentioned above comes in handy: simply copy the entire <code>/dev</code> directory of the dummy to your new migrated container (worked in my case, at least with Debian Etch).}}
==== tty device nodes ====
In order for <code>vzctl enter</code> to work, a container needs to have some entries in <code>/dev</code>. This can either be <code>/dev/ttyp*</code> and <code>/dev/ptyp*</code>, or <code>/dev/ptmx</code> and a mounted <code>/dev/pts</code>.
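If you go the <code>/dev/ttyp*</code> / <code>/dev/ptyp*</code> route, a sketch for creating the first few legacy pty pairs could look like this (BSD pty master/slave devices use character majors 2 and 3):
<source lang="bash">
cd /vz/private/123/dev
for i in 0 1 2 3 4 5 6 7 8 9; do
    mknod --mode 666 ptyp$i c 2 $i   # master side
    mknod --mode 666 ttyp$i c 3 $i   # slave side
done
</source>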
===== /dev/ptmx =====
Create the <code>/dev/ptmx</code> device node if it is missing:
 mknod --mode 666 /vz/private/123/dev/ptmx c 5 2
Also check that <code>/dev/urandom</code> exists. If it does not, create it with:
 mknod --mode 444 /vz/private/123/dev/urandom c 1 9
 
==== Using udev anyway ====
CentOS 5 can run in a container with udev enabled. You need to create <code>/etc/udev/devices</code>, containing the above device nodes. Also, the following will create the extra device nodes you need:
mkdir /vz/private/123/etc/udev/devices
/sbin/MAKEDEV -d /vz/private/123/dev {p,t}ty{a,p}{0,1,2,3,4,5,6,7,8,9,a,b,c,d,e,f} console core full kmem kmsg mem null port ptmx random urandom zero ram0
/sbin/MAKEDEV -d /vz/private/123/etc/udev/devices {p,t}ty{a,p}{0,1,2,3,4,5,6,7,8,9,a,b,c,d,e,f} console core full kmem kmsg mem null port ptmx random urandom zero ram0
===/proc===
{{Note| One more time you may skip this if you are using a container created from a template of the same distro as your basis system.}}
 
Make sure the <code>/proc</code> directory exists:
 ls -la /vz/private/123/ | grep proc
If it does not, create it:
 mkdir /vz/private/123/proc
=== /etc/init.d services ===
Some system services can (or in some cases should) be disabled and/or uninstalled. A few good candidates are:
* acpid, amd (not needed)
* checkfs, checkroot (no filesystem checking is required in a container)
* clock (no clock setting is required/allowed in a container)
* consolefont (a container does not have a console)
* hdparm (a container does not have real hard drives)
* klogd (unless you use iptables to LOG some packets)
* keymaps (a container does not have a real keyboard)
* kudzu (a container does not have real hardware)
* lm_sensors (a container does not have access to hardware sensors)
* microcode_ctl (a container can not update CPU microcode)
* netplugd (a container does not have a real Ethernet device)
* irqbalance (this is handled on the host node)
* auditd (not needed in a container)
* lvm2-monitor (no LVM in containers)
* ntp/ntpd (clock is taken from the host node)
To see which services are enabled, you can use <code>chkconfig --list</code> on RedHat-like systems or <code>rc-update show</code> on Gentoo.
To disable the service:
* RedHat/Fedora/SUSE: <code>/sbin/chkconfig SERVICENAME off</code>
* Debian: <code>update-rc.d -f SERVICENAME remove</code>
* Gentoo: <code>/sbin/rc-update del SERVICENAME</code>
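Purely as an illustration for a RedHat-like guest (the service names are just candidates from the list above; adjust to what is actually installed), they can all be switched off in one pass by chroot-ing into the container private area:
<source lang="bash">
for svc in acpid kudzu lm_sensors microcode_ctl netplugd irqbalance auditd lvm2-monitor ntpd; do
    chroot /vz/private/123 /sbin/chkconfig "$svc" off 2>/dev/null
done
</source>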
==== Fedora/CentOS/Red Hat ====
Edit <code>/vz/private/{CTID}/etc/sysconfig/network-scripts/ifcfg-eth''x''</code>.
Make the <code>ONBOOT</code> line look like this:
 ONBOOT=no
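A one-liner to do that non-interactively (assuming <code>eth0</code>; repeat for any other interfaces) might be:
<source lang="bash">
sed -i 's/^ONBOOT=.*/ONBOOT=no/' /vz/private/123/etc/sysconfig/network-scripts/ifcfg-eth0
</source>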
 
If the files /vz/private/{CTID}/etc/sysconfig/network-scripts/ifdown-venet or
/vz/private/{CTID}/etc/sysconfig/network-scripts/ifup-venet exist, make sure they won't be used. These two files might exist if the physical server had OpenVZ installed. One way to do this is to rename them, like so:
 mv ifdown-venet SKIP.ifdown-venet
 mv ifup-venet SKIP.ifup-venet
 
Failing to do this will prevent networking from starting up correctly in the container.
==== Debian/Ubuntu ====
In <code>/etc/network/interfaces</code> (i.e. <code>/vz/private/{CTID}/etc/network/interfaces</code>) you will have something like:
<pre>
iface lo inet loopback

iface eth0 inet static
    address 10.0.0.4
    netmask 255.0.0.0
</pre>
You can either comment out the eth* interface stanza(s), or take them out of the "auto" line(s).
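For example, the adjusted <code>/etc/network/interfaces</code> could end up looking like this (the eth0 stanza commented out and dropped from any <code>auto</code> line; the addresses are the illustrative ones from above):
<pre>
auto lo
iface lo inet loopback

#auto eth0
#iface eth0 inet static
#    address 10.0.0.4
#    netmask 255.0.0.0
</pre>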
 
===== Ubuntu server 8.x =====
 
Here is what I did for my Ubuntu Server JEOS 8.04.2:
 
<pre>
rm /vz/private/123/etc/network/if-up.d/ntpdate
rm /vz/private/123/etc/event.d/tty{1,2,3,4,5,6}
vzctl exec 123 update-rc.d -f klogd remove
vzctl exec 123 update-rc.d -f udev remove
 
</pre>
==== openSUSE/SLES ====
Use YaST.
 
=== Disable udev if you create DEVNODES devices ===
 
If you are creating devices for the container with a DEVNODES statement in the <code>CTID.conf</code> file, then these devices may be overwritten/deleted by udev when the container starts. As udev cannot "see" the device from within the container, it disables it. Therefore, if you have DEVNODES statements in <code>CTID.conf</code>, disable udev.
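For reference, such a statement in the container config looks like this (<code>net/tun</code> is only an example device):
<source lang="bash">
# in /etc/vz/conf/123.conf
DEVNODES="net/tun:rw"
</source>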
 
In Fedora, RedHat and CentOS, try commenting out any '''udev''' entries in <code>/vz/private/{CTID}/etc/rc.sysinit</code>.
Comment out the line similar to this:
#[ -x /sbin/start_udev ] && /sbin/start_udev
=== Other adjustments ===
There might be other adjustments needed. Please add those here (just above this section) if you have more info.
== Starting the new container ==
Try to start your new container:
vzctl start 123
Now check that everything works fine. If not, see [[#Troubleshooting]] below.
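A few quick smoke tests right after the first start (CT ID 123 assumed):
<source lang="bash">
vzctl exec 123 ps ax    # did init and the remaining services come up?
vzctl exec 123 df -h    # does df show / (see the /etc/mtab adjustment above)?
vzctl enter 123         # can you get a shell inside the container?
</source>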
 
== Troubleshooting ==
=== PHP not serving pages / random issues ===
Make sure that <code>/tmp</code> and <code>/var/tmp</code> exist if you rsynced over your data, and that they have the proper permissions (run in the container root, e.g. <code>/vz/private/123/</code>):
 mkdir tmp
 chmod 1777 tmp
=== Can't enter container ===
If you can not enter your container (using <code>vzctl enter</code>), you should be able to at least execute commands in it (using <code>vzctl exec</code>).
First, see the [[#tty device nodes]] section above.
vzctl exec 123 mount -t devpts none /dev/pts
Then, add the appropriate mount command to the container's startup scripts. On some distros, you need to have the appropriate line in the container's <code>/etc/fstab</code>.
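A crude sketch of the startup-script variant, assuming the distro executes <code>rc.local</code> at boot (otherwise use the <code>/etc/fstab</code> line shown earlier):
<source lang="bash">
# make sure this lands before any final "exit 0" in rc.local
echo "mount -t devpts none /dev/pts" >> /vz/private/123/etc/rc.local
</source>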
In Fedora, try commenting out any '''udev''' entries in <code>/vz/private/{CTID}/etc/rc.sysinit</code>:
 vi /vz/private/{CTID}/etc/rc.sysinit
Locate the '''udev''' entry from within vim
/udev
If anything goes wrong, try to find out why and fix it. If you have enough Linux experience, it can be handled. Also check out IRC, and please report back on this page.
== Scripting ==
For CentOS, below are two scripts to help with the migration:
* [http://pastebin.com/ehf8G3H6 pre-copy.sh] Does the necessary configuration required for the migration of a server/VM to a CT.
* [http://pastebin.com/thn0sezV post-copy.sh] Performs steps 5 and 6.
== Success stories ==
{{Note|Please add your line to the bottom of this list, and do not forget to sign it using <code><nowiki>--~~~~</nowiki></code>}}
* Debian 3.1 Sarge with MySQL, apache2, PowerDNS --[[User:Stoffell|stoffell]] 08:41, 8 February 2007 (EST)
* Red Hat 7.2 with MySQL 3.23, apache, Chilisoft --[[User:Stoffell|stoffell]] 13:26, 9 February 2007 (EST)
* Gentoo with Courier, Postfix, MySQL, Apache2 --[[User:bfrackie|bfrackie]] 19:00, 18 March 2007 (EST)
* AltLinux Master with qmail, MySQL, Apache, etc - to Debian/testing with OpenVZ --[[User:alexkuklin|alexkuklin]] 16:16, 23 March 2007 (EST)
* Centos 4.4 with apache2, SVN, TRAC, etc. --[[User:bitherder|bitherder]] 23:38, 26 February 2008 (EST)
* Centos 4.6 with apache2, Tomcat 5.0.x, postgresql, etc on CentOS 5.1 64bit Host --[[User:laslos|laslos]] 17:35, 10 March 2008 (EST)
* Debian Etch with apache2 etc... on CentOS 4.6 Host --[[User:laslos|laslos]] 19:46, 10 March 2008 (EST)
* Debian 1:3.3.5-13 with apache2, PHP, etc. --[[User:Spawrks|spawrks]] 23:36, 10 April 2008 (EST)
* Debian Etch with apache2, MySQL, etc. --[[User:Zhafrance|zhafrance]] 16:29, 20 April 2008 (EST)
* Debian Etch i386 with apache2, MySQL, etc. --[[User:geejay|geejay]] 17:29, 26 May 2008 (GMT)
* Centos 4.6 with apache2, MySQL, Qmail etc. --[[User:Bharathchari|Bharathchari]] 08:06, 13 June 2008 (EDT)
* Centos 4.6 with cPanel/WHM (Apache2, Mysql, Exim, etc) --[[User:Zccopwrx|Zccopwrx]] 08:16, 30 July 2008 (EDT)
* SlackWare 10.1 (Qmail) --[[User:defiancenl|defiancenl]]
* SlackWare 10.0 (Qmail) --[[User:defiancenl|defiancenl]]
* Ubuntu 8.04.3 LTS JEOS (Apache2, Mysql) --[[User:bougui|bougui]] Fri Aug 28 10:40:41 EDT 2009
* CentOS 5.3 (Apache2, Mysql, Cacti) --[[User:kofl|kofl]] September 12 2009
* Scientific Linux 3.0.9 (Macrovision FLEXlm) {{unsigned|137.226.90.94|11:34, 4 November 2009}}
* Red Hat Enterprise Linux 4 (rhel4) --[[User:Bpuklich|Bpuklich]] 17:20, 15 February 2010 (UTC)
* Debian SID up-to-date with apache2, MySQL, postgrey etc. --nyquist 14:04, 06 July 2010 (UTC)
* Centos 5.x with Plesk -- 05:33, 17 August 2010 (UTC)
* Redhat 4 -- 20:32, 18 August 2010 (UTC)
* Fedora 4 -- 15:06, 20 August 2010 (UTC)
* Fedora 9 x64 with FDS and samba PDC --burn 23:20 10 October 2010
* Fedora 3 x32 with Plesk -- 23 October 2010 --[[User:Rexwickham|Rex Wickham (2020media.com)]] 13:15, 23 October 2010 (UTC)
* Debian 6 (Squeeze) with Lighttpd, MySQL, nfs, smb, etc. --[[Special:Contributions/95.21.175.189|95.21.175.189]] 22:39, 30 July 2011 (UTC)
* RedHat 9 (Shrike) with apache,nginx,mysql,qmail 09 August 2011 (UTC)
* Centos 5.6 with PostgreSQL and JitterBit 24 August 2011
* Centos 4.9 with MySQL, Apache, ColdFusion, etc. 26 August 2011
* Centos 5.6 with MySQL, Apache, BIND, Postfix, Mono, etc. 26 August 2011
* Centos 5.7 with MySQL, Apache, Nginx, Memcached, Postfix, Openx, etc. --[[User:juranas|Juranas]] 18 November 2011
* RedHat Enterprise Linux 5 (rhel 5.6 - x86_64) 14:50, 18 November 2011
* Debian 6.0.4 with DTC Hosting Control Panel. 15:00, 14 May 2012
* Debian 6, LAMP with ISPManager CP (no adjustments were made, just transferred the file structure and created ctid.conf) 03:19, 15 Jun 2012
* Debian 5.0.3, with Mysql, Apache, ISCP omega, Postfix, etc --[[Special:Contributions/91.143.222.253|91.143.222.253]] 19:47, 28 June 2012 (EDT)
* Debian 6.0.5 with artica-zarafa, 20 Nov 2012
[[Category:HOWTO]]