Migration from one HN to another

The vzmigrate script is used to migrate a container from one Hardware Node to another.

Summary

OLD SERVER:

[root@OpenVZ ~]# ssh-keygen -t rsa
[root@OpenVZ ~]# cd .ssh/
[root@OpenVZ .ssh]# scp id_rsa.pub root@10.1.5.6:./id_rsa.pub

NEW SERVER:

[root@Char ~]# cd .ssh/
[root@Char .ssh]# touch authorized_keys2
[root@Char .ssh]# chmod 600 authorized_keys2
[root@Char .ssh]# cat ../id_rsa.pub >> authorized_keys2
[root@Char .ssh]# rm ../id_rsa.pub
rm: remove regular file `../id_rsa.pub'? y

OLD SERVER: (test that you can ssh without a password)

[root@OpenVZ .ssh]# ssh -2 -v root@10.1.5.6
[root@Char ~]# exit
[root@OpenVZ .ssh]# vzmigrate 10.1.5.6 101

The above example migrates container 101 to 10.1.5.6. A detailed explanation of each step follows below.

Setting up SSH keys

You first have to set up SSH so that the old HN can log in to the new HN without a password prompt. Run the following on the old HN.

[root@OpenVZ ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
74:7a:3e:7f:27:2f:42:bb:52:4c:ad:55:31:6f:79:f2 root@OpenVZ.ics.local
[root@OpenVZ ~]# cd .ssh/
[root@OpenVZ .ssh]# ls -al
total 20
drwx------  2 root root 4096 Aug 11 09:41 .
drwxr-x---  5 root root 4096 Aug 11 09:40 ..
-rw-------  1 root root  887 Aug 11 09:41 id_rsa
-rw-r--r--  1 root root  231 Aug 11 09:41 id_rsa.pub
[root@OpenVZ .ssh]# scp id_rsa.pub root@10.1.5.6:./id_rsa.pub
The authenticity of host '10.1.5.6 (10.1.5.6)' can't be established.
RSA key fingerprint is 3f:2a:26:15:e4:37:e2:06:b8:4d:20:ee:3a:dc:c1:69.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.1.5.6' (RSA) to the list of known hosts.
root@10.1.5.6's password:
id_rsa.pub               100%  231     0.2KB/s   00:00

Run the following on the new HN.

[root@Char ~]# cd .ssh/
[root@Char .ssh]# touch authorized_keys2
[root@Char .ssh]# chmod 600 authorized_keys2
[root@Char .ssh]# cat ../id_rsa.pub >> authorized_keys2
[root@Char .ssh]# rm ../id_rsa.pub
rm: remove regular file `../id_rsa.pub'? y
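
As an alternative to copying the key over with scp and appending it by hand, many distributions ship the ssh-copy-id helper, which performs the same key installation in one step from the old HN (assuming ssh-copy-id is available there; it installs the key into the standard authorized_keys file, which OpenSSH reads in addition to authorized_keys2):

[root@OpenVZ ~]# ssh-copy-id root@10.1.5.6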

Run the following on the old HN.

[root@OpenVZ .ssh]# ssh -2 -v root@10.1.5.6
OpenSSH_3.9p1, OpenSSL 0.9.7a Feb 19 2003
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to 10.1.5.6 [10.1.5.6] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/id_rsa type 1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
debug1: match: OpenSSH_4.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_3.9p1
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-cbc hmac-md5 none
debug1: kex: client->server aes128-cbc hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '10.1.5.6' is known and matches the RSA host key.
debug1: Found key in /root/.ssh/known_hosts:1
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Next authentication method: gssapi-with-mic
debug1: An invalid name was supplied
Cannot determine realm for numeric host address

debug1: An invalid name was supplied
Cannot determine realm for numeric host address

debug1: Next authentication method: publickey
debug1: Offering public key: /root/.ssh/id_rsa
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Offering public key: /root/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 149
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
Last login: Thu Aug  9 16:41:30 2007 from 10.1.5.20
[root@Char ~]# exit
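
If you prefer a quick non-interactive check as well, a one-off remote command with BatchMode (a minimal sketch; BatchMode makes ssh fail instead of falling back to a password prompt) should print the new HN's hostname without asking for anything:

[root@OpenVZ .ssh]# ssh -o BatchMode=yes root@10.1.5.6 hostname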

Prerequisites

Make sure:

  • you have at least one good backup of the container you intend to migrate
  • rsync is installed on the target host
  • In general, you cannot migrate from a newer kernel version to an older one
  • By default, after the migration process is completed, the Container private area and configuration file are deleted on the old HN. If you want the Container private area on the Source Node to be kept after a successful migration, override the default vzmigrate behavior with the -r no switch (see the example after this list).
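
For instance, keeping the private area on the source node while migrating container 101 to 10.1.5.6 (the destination address used throughout this article) looks like this:

[root@OpenVZ ~]# vzmigrate -r no 10.1.5.6 101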

vzmigrate usage

Now that passwordless SSH is working and vzmigrate can connect to the new HN, here is a quick overview of the vzmigrate command itself.

This program is used for container migration to another node
Usage:
vzmigrate [-r yes|no] [--ssh=<options>] [--keep-dst] [--online] [-v]
        destination_address <CTID>
Options:
-r, --remove-area yes|no
        Whether to remove container on source HN for successfully migrated container.
--ssh=<ssh options>
        Additional options that will be passed to ssh while establishing
        connection to destination HN. Please be careful with options
        passed, DO NOT pass destination hostname.
--keep-dst
        Do not clean synced destination container private area in case of some
        error. It makes sense to use this option on big container migration to
        avoid syncing container private area again in case some error
        (on container stop for example) occurs during first migration attempt.
--online
        Perform online (zero-downtime) migration: during the migration the
        container hangs for a while and after the migration it continues working
        as though nothing has happened.
-v
        Verbose mode. Causes vzmigrate to print debugging messages about
        its progress (including some time statistics).
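
Combining the options above, a verbose online migration of container 101 that also keeps the synced destination private area in case of an error could be run as follows (flags taken from the usage text above):

[root@OpenVZ ~]# vzmigrate -v --online --keep-dst 10.1.5.6 101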

Example

Here is an example of migrating container 101 from the current HN to one at 10.1.5.6:

[root@OpenVZ .ssh]# vzmigrate 10.1.5.6 101
OPT:10.1.5.6
Starting migration of container 101 on 10.1.5.6
Preparing remote node
Initializing remote quota
Syncing private
Syncing 2nd level quota
Turning quota off
Cleanup

Migrate all running containers

Here's a simple shell script that will migrate each container one after another. Just pass the destination host node as the single argument to the script. Feel free to add the -v flag to the vzmigrate flags if you'd like to see it execute with the verbose option:

for CT in $(vzlist -H -o veid); do vzmigrate --remove-area no --keep-dst "$1" "$CT"; done
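
A slightly more defensive version of the same loop, written as a standalone script, might look like this (a sketch; the script name migrate_all.sh and the stop-on-failure behavior are choices made here, not part of the original one-liner):

#!/bin/sh
# migrate_all.sh -- migrate every running container to the HN given as the first argument
# Usage: ./migrate_all.sh <destination_address>

if [ -z "$1" ]; then
    echo "Usage: $0 <destination_address>" >&2
    exit 1
fi

for CT in $(vzlist -H -o veid); do
    echo "Migrating container $CT to $1 ..."
    if ! vzmigrate --remove-area no --keep-dst "$1" "$CT"; then
        echo "Migration of container $CT failed, stopping." >&2
        exit 1
    fi
done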


Additional Information

You can also use this guide to migrate from OpenVZ to Proxmox VE.

If you use Proxmox VE, you may also want to read how to Backup-Restore a virtual machine, be it OpenVZ or KVM.