<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.openvz.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wfischer</id>
	<title>OpenVZ Virtuozzo Containers Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.openvz.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wfischer"/>
	<link rel="alternate" type="text/html" href="https://wiki.openvz.org/Special:Contributions/Wfischer"/>
	<updated>2026-04-09T16:47:40Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=3925</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=3925"/>
		<updated>2008-01-09T15:18:46Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: added live-switchover&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example, the two machines forming the cluster run CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps such as recompiling the DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Update:&amp;lt;/b&amp;gt; this howto currently does not describe details on OpenVZ Kernel 2.6.18, which contains DRBD version 8.*. Meanwhile, some hints on using OpenVZ Kernel 2.6.18 with DRBD 8 can be found in [http://forum.openvz.org/index.php?t=msg&amp;amp;th=3213&amp;amp;start=0&amp;amp; this thread in the forum].&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: [http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf (PDF, 145K)]&lt;br /&gt;
&lt;br /&gt;
Some other additional information can be found in the documentation of the Thomas-Krenn.AG cluster (the author of this howto works in the cluster development there, which is how he was able to write this howto :-). The full documentation, with helpful illustrations, is currently only [http://www.thomas-krenn.com/en/service-support/knowledge-center/cluster/documentation.html available in German]: &lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module. The DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
On a hardware node for production use, nothing should run that is not strictly needed for OpenVZ (for security reasons, anything not needed by OpenVZ should run inside a VE). Therefore, compile DRBD and Heartbeat on another machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has been used successfully in many two-node clusters around the world. As the codebase of the 1.2.* branch has been in production use for many years, the code is very stable. At the time of writing, Heartbeat 1.2.4 is the current version of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the package:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you find four RPM packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, and heartbeat-stonith-1.2.4-1.i386.rpm are needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, make sure to pick the version that matches the DRBD version included in the OpenVZ kernel you want to use. If you are unsure about the version, do the following steps while running that OpenVZ kernel on a test machine (I used another virtual machine on a VMware Server for this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that this way the kernel-devel package from CentOS is used, but this does not matter, as the created drbd-km RPM will not be used (the DRBD kernel module is already included in the OpenVZ kernel). If the kernel-devel package does not match the version of the currently running kernel, you can execute 'make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/' to point directly to the kernel sources.&lt;br /&gt;
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines in the same way as you would for a normal OpenVZ installation, but do not create a filesystem for /vz. That filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update the GRUB configuration to use the OpenVZ kernel by default. Disable starting OpenVZ at system boot on both nodes (OpenVZ will be started and stopped by Heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig vz off&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes, create a partition that acts as the underlying device for DRBD. The partitions should have exactly the same size (for this example I created a 10 GB partition hda3 on each node using fdisk). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
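The sizes can differ slightly between disks, so it is worth checking explicitly. A minimal sketch (the byte counts below are hypothetical example values; on a real node you would obtain them with 'blockdev --getsize64 /dev/hda3'):&lt;br /&gt;

```shell
# Compare the byte sizes of the backing partitions of both nodes.
# Hypothetical example values for two identical 10 GB partitions;
# on a real node, get the number with: blockdev --getsize64 /dev/hda3
SIZE_NODE1=10737418240
SIZE_NODE2=10737418240
if [ "$SIZE_NODE1" -eq "$SIZE_NODE2" ]; then
    echo "sizes match"
else
    echo "sizes differ"
fi
```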
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in sync, and DRBD has no way of knowing whether you want the initial sync to go from ovz-node1 to ovz-node2, or from ovz-node2 to ovz-node1. As there is no data on the device yet, the direction does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable, although it is still syncing in the background.&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Move the original /vz directory to /vz.orig and recreate the /vz directory to have it as a mount point '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /vz /vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir /vz&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards move the necessary OpenVZ directories (/etc/vz, /etc/sysconfig/vz-scripts, /var/vzquota) and replace them with symbolic links '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/vz /etc/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/vz /etc/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it '''(only on ovz-node1!)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /vz&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz.orig/* /vz/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/vz /vz/cluster/etc/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /vz/cluster/etc/sysconfig/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /vz/cluster/var/&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat RPMs on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to &amp;lt;code&amp;gt;/etc/ha.d/ha.cf&amp;lt;/code&amp;gt; on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to &amp;lt;code&amp;gt;/etc/ha.d/authkeys&amp;lt;/code&amp;gt; on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file haresources and copy it to &amp;lt;code&amp;gt;/etc/ha.d/haresources&amp;lt;/code&amp;gt; on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Note that it is not necessary to configure IPs for gratuitous ARP here; the gratuitous ARP is done by OpenVZ itself, through &amp;lt;code&amp;gt;/etc/sysconfig/network-scripts/ifup-venet&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/usr/lib/vzctl/scripts/vps-functions&amp;lt;/code&amp;gt;. Below is an example haresources file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 drbddisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
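Heartbeat acquires the resources on a haresources line from left to right on startup and releases them from right to left on shutdown. Annotated (the '#' lines are ordinary haresources comments):&lt;br /&gt;

```
# Resources are started left to right and stopped in reverse order:
#   drbddisk::r0                      - make DRBD resource r0 Primary
#   Filesystem::/dev/drbd0::/vz::ext3 - mount the DRBD device on /vz
#   vz                                - start OpenVZ via the vz init script
#   MailTo::youremail@yourdomain.tld  - send a notification mail on takeover
ovz-node1 drbddisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld
```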
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Before going in production: testing, testing, testing, and ...hm... testing! ==&lt;br /&gt;
&lt;br /&gt;
The installation of the cluster is finished at this point. Before putting the cluster into production, it is very important to test it. Depending on your hardware, you may encounter problems exactly when a failover is necessary. And as the cluster is about high availability, such problems must be found before the cluster is used in production.&lt;br /&gt;
&lt;br /&gt;
Here is one example: the e1000 driver included in kernels &amp;lt; 2.6.12 has a problem when a cable gets unplugged while broadcast packets are still being sent out on that interface. When using broadcast communication in Heartbeat on a crossover link, this fills up the transmit ring buffer of the adapter (the buffer is full about 8 minutes after the cable is unplugged). Using unicast communication in Heartbeat, for example, avoids the problem. For details see: http://www.osdl.org/developer_bugzilla/show_bug.cgi?id=699#c22&lt;br /&gt;
&lt;br /&gt;
Without testing you may not be aware of such problems and may face them when the cluster is in production and a failover would be necessary. So test your cluster carefully!&lt;br /&gt;
&lt;br /&gt;
Possible tests can include:&lt;br /&gt;
* power outage test of active node&lt;br /&gt;
* power outage test of passive node&lt;br /&gt;
* network connection outage test of eth0 of active node&lt;br /&gt;
* network connection outage test of eth0 of passive node&lt;br /&gt;
* network connection outage test of crossover network connection&lt;br /&gt;
* ...&lt;br /&gt;
&lt;br /&gt;
As mentioned above, some problems only arise after an outage lasts longer than a few minutes. So also run the tests with outage durations of &amp;gt;1h, for example.&lt;br /&gt;
&lt;br /&gt;
Before you start to test, build a test plan. Some valuable information on that can be found in chapter 3, &amp;quot;Testing a highly available Tivoli Storage Manager cluster environment&amp;quot;, of the Redbook ''IBM Tivoli Storage Manager in a Clustered Environment'', see http://www.redbooks.ibm.com/abstracts/sg246679.html. That chapter mentions that, in the authoring team's experience, the testing phase must take at least twice the total implementation time of the cluster.&lt;br /&gt;
&lt;br /&gt;
== Before installing kernel updates: testing again ==&lt;br /&gt;
&lt;br /&gt;
New OpenVZ kernels often include driver updates. This kernel, for example, includes an update of the e1000 module: http://openvz.org/news/updates/kernel-022stab078.21&lt;br /&gt;
&lt;br /&gt;
To avoid overlooking problems with new components (such as a newer kernel), it is necessary to repeat the tests mentioned above. But as the cluster is already in production, a second cluster (a test cluster) with the same hardware as the main cluster is needed. Use this test cluster to test kernel updates or major OS updates for the hardware node before putting them on the production cluster.&lt;br /&gt;
&lt;br /&gt;
I know this is not an easy task, as it is time-consuming and needs additional hardware just for testing. But when really business-critical applications are running on the cluster, it is very good to know that the cluster also works fine with new updates installed on the hardware node. In many cases a dedicated test cluster and the time effort for testing updates may be too costly. If you cannot test updates this way, keep in mind that over time (when you must install security updates of the OS or the kernel) you end up with a cluster that you have not tested in its current configuration.&lt;br /&gt;
&lt;br /&gt;
If you need a tested cluster (including tested kernel updates), you may take a look at this Virtuozzo cluster: http://www.thomas-krenn.com/cluster&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when they contain a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools: their DRBD API version must match the API version of the DRBD module included in the OpenVZ kernel. The API versions can be found at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The safest approach is to always use the userspace tools of exactly the same version as the DRBD module included in the OpenVZ kernel.&lt;br /&gt;
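On a running node, the module's API version can also be read from the first line of /proc/drbd. A minimal sketch, assuming the line format &amp;quot;version: 0.7.17 (api:77/proto:74)&amp;quot; shown earlier in this article:&lt;br /&gt;

```shell
# Extract the api number from a /proc/drbd version line.
drbd_module_api() {
    sed -n 's/^version: .*(api:\([0-9]*\).*$/\1/p'
}

# With the version line from this article (prints 77):
echo 'version: 0.7.17 (api:77/proto:74)' | drbd_module_api
# On a live node: cat /proc/drbd | drbd_module_api
```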
&lt;br /&gt;
In this example the initial cluster installation contained OpenVZ kernel 2.6.8-022stab078.10, which contains the DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which contains the DRBD module 0.7.20.&lt;br /&gt;
First, build the DRBD userspace tools version 0.7.20 on your buildmachine. Then stop Heartbeat and DRBD on the passive node (you can use 'cat /proc/drbd' to see which node is active and which one is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as default kernel in /etc/grub.conf and then reboot this node.&lt;br /&gt;
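For example, /etc/grub.conf could look like this afterwards (a sketch; the exact paths and entry order depend on your installation, so adjust the 'default' index so that it points at the new OpenVZ kernel entry, counting from 0):&lt;br /&gt;

```
# /etc/grub.conf (excerpt): "default" selects the boot entry, counting
# from 0, so default=0 boots the first "title" entry below.
default=0
timeout=5
title CentOS (2.6.8-022stab078.14)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.8-022stab078.14 ro root=/dev/hda1
        initrd /boot/initrd-2.6.8-022stab078.14.img
```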
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch the services over so that the currently active node becomes the passive node. Execute the following on the still-active node (note that the hb_standby command may be located in /usr/lib/heartbeat instead):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now do the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as default kernel in /etc/grub.conf and reboot the node.&lt;br /&gt;
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
After every update of the OpenVZ tools, ensure that OpenVZ is not started at system boot (Heartbeat must remain in control of starting and stopping OpenVZ). To disable starting OpenVZ at system boot, execute on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig vz off&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Live-Switchover with the help of checkpointing ==&lt;br /&gt;
&lt;br /&gt;
With the help of [[Checkpointing_and_live_migration|checkpointing]] it is possible to do live switchovers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Important:&amp;lt;/b&amp;gt; although this HOWTO currently describes the use of DRBD 0.7, it is necessary to use DRBD 8 to be able to use this live-switchover feature reliably. Some hints on using OpenVZ Kernel 2.6.18 with DRBD 8 can be found in [http://forum.openvz.org/index.php?t=msg&amp;amp;th=3213&amp;amp;start=0&amp;amp; this thread in the forum].&lt;br /&gt;
&lt;br /&gt;
The following scripts were written by Thomas Kappelmueller. They should be placed in /root/live-switchover/ on both nodes. To activate the scripts, execute the following commands on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /root/live-switchover/openvz /etc/init.d/&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /root/live-switchover/live_switchover.sh /root/bin/&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also necessary to replace &amp;lt;code&amp;gt;vz&amp;lt;/code&amp;gt; with an adjusted init script (&amp;lt;code&amp;gt;openvz&amp;lt;/code&amp;gt; in this example), so /etc/ha.d/haresources has the following content on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 drbddisk::r0 Filesystem::/dev/drbd0::/vz::ext3 openvz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Script cluster_freeze.sh ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#Script by Thomas Kappelmueller&lt;br /&gt;
#Version 1.0&lt;br /&gt;
LIVESWITCH_PATH='/vz/cluster/liveswitch'&lt;br /&gt;
&lt;br /&gt;
if [ -f $LIVESWITCH_PATH ]&lt;br /&gt;
then&lt;br /&gt;
        rm -f $LIVESWITCH_PATH&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
RUNNING_VE=$(vzlist -1)&lt;br /&gt;
&lt;br /&gt;
for I in $RUNNING_VE&lt;br /&gt;
do&lt;br /&gt;
        BOOTLINE=$(cat /etc/sysconfig/vz-scripts/$I.conf | grep -i &amp;quot;^onboot&amp;quot;)&lt;br /&gt;
        if [ $I != 1 -a &amp;quot;$BOOTLINE&amp;quot; = &amp;quot;ONBOOT=\&amp;quot;yes\&amp;quot;&amp;quot; ]&lt;br /&gt;
        then&lt;br /&gt;
                vzctl chkpnt $I&lt;br /&gt;
&lt;br /&gt;
                if [ $? -eq 0 ]&lt;br /&gt;
                then&lt;br /&gt;
                        vzctl set $I --onboot no --save&lt;br /&gt;
                        echo $I &amp;gt;&amp;gt; $LIVESWITCH_PATH&lt;br /&gt;
                fi&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
exit 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Script cluster_unfreeze.sh ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#Script by Thomas Kappelmueller&lt;br /&gt;
#Version 1.0&lt;br /&gt;
&lt;br /&gt;
LIVESWITCH_PATH='/vz/cluster/liveswitch'&lt;br /&gt;
&lt;br /&gt;
if [ -f $LIVESWITCH_PATH ]&lt;br /&gt;
then&lt;br /&gt;
        FROZEN_VE=$(cat $LIVESWITCH_PATH)&lt;br /&gt;
else&lt;br /&gt;
        exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
for I in $FROZEN_VE&lt;br /&gt;
do&lt;br /&gt;
        vzctl restore $I&lt;br /&gt;
&lt;br /&gt;
        if [ $? != 0 ]&lt;br /&gt;
        then&lt;br /&gt;
                vzctl start $I&lt;br /&gt;
        fi&lt;br /&gt;
&lt;br /&gt;
        vzctl set $I --onboot yes --save&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
rm -f $LIVESWITCH_PATH&lt;br /&gt;
&lt;br /&gt;
exit 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Script live_switchover.sh ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#Script by Thomas Kappelmueller&lt;br /&gt;
#Version 1.0&lt;br /&gt;
&lt;br /&gt;
ps -eaf | grep 'vzctl enter' | grep -v 'grep' &amp;gt; /dev/null&lt;br /&gt;
if [ $? -eq 0 ]&lt;br /&gt;
then&lt;br /&gt;
  echo 'vzctl enter is active. please finish before live switchover.'&lt;br /&gt;
  exit 1&lt;br /&gt;
fi&lt;br /&gt;
ps -eaf | grep 'vzctl exec' | grep -v 'grep' &amp;gt; /dev/null&lt;br /&gt;
if [ $? -eq 0 ]&lt;br /&gt;
then&lt;br /&gt;
  echo 'vzctl exec is active. please finish before live switchover.'&lt;br /&gt;
  exit 1&lt;br /&gt;
fi&lt;br /&gt;
echo &amp;quot;Freezing VEs...&amp;quot;&lt;br /&gt;
/root/live-switchover/cluster_freeze.sh&lt;br /&gt;
echo &amp;quot;Starting Switchover...&amp;quot;&lt;br /&gt;
/usr/lib64/heartbeat/hb_standby&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Script openvz ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#&lt;br /&gt;
# openvz        Startup script for OpenVZ&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
start() {&lt;br /&gt;
        /etc/init.d/vz start &amp;gt; /dev/null 2&amp;gt;&amp;amp;1&lt;br /&gt;
        RETVAL=$?&lt;br /&gt;
        /root/live-switchover/cluster_unfreeze.sh&lt;br /&gt;
        return $RETVAL&lt;br /&gt;
}&lt;br /&gt;
stop() {&lt;br /&gt;
        /etc/init.d/vz stop &amp;gt; /dev/null 2&amp;gt;&amp;amp;1&lt;br /&gt;
        RETVAL=$?&lt;br /&gt;
        return $RETVAL&lt;br /&gt;
}&lt;br /&gt;
status() {&lt;br /&gt;
        /etc/init.d/vz status &amp;gt; /dev/null 2&amp;gt;&amp;amp;1&lt;br /&gt;
        RETVAL=$?&lt;br /&gt;
        return $RETVAL&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# See how we were called.&lt;br /&gt;
case &amp;quot;$1&amp;quot; in&lt;br /&gt;
  start)&lt;br /&gt;
        start&lt;br /&gt;
        ;;&lt;br /&gt;
  stop)&lt;br /&gt;
        stop&lt;br /&gt;
        ;;&lt;br /&gt;
  status)&lt;br /&gt;
        status&lt;br /&gt;
        ;;&lt;br /&gt;
  *)&lt;br /&gt;
        echo $&amp;quot;Usage: openvz {start|stop|status}&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
esac&lt;br /&gt;
&lt;br /&gt;
exit $RETVAL&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Partners&amp;diff=3765</id>
		<title>Partners</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Partners&amp;diff=3765"/>
		<updated>2007-12-14T12:17:14Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: added Thomas-Krenn.AG&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some companies that are working together with the OpenVZ project in one way or another. Feel free to add your company profile here ('''in alphabetical order''').&lt;br /&gt;
&lt;br /&gt;
== Computer Tyme ==&lt;br /&gt;
[http://ctyme.com Computer Tyme] is using OpenVZ for their spam filtering service [http://junkemailfilter.com Junk Email Filter dot com]. They are active proponents of OpenVZ; Marc Perkel worked at the OpenVZ booth during LinuxWorld Expo 2007.&lt;br /&gt;
&lt;br /&gt;
== LastSpam ==&lt;br /&gt;
[http://www.lastspam.com LastSpam] offers managed e-mail security services (anti-spam and anti-virus/bad content) and uses OpenVZ to deliver excellent service and support to its customers.  Ugo Bellavance, LastSpam's head server architect and manager, is an active member of the OpenVZ community: helping on the forum, making suggestions, and submitting patches.&lt;br /&gt;
&lt;br /&gt;
== netVOICE communications ==&lt;br /&gt;
[http://www.netvoice.ca/ netVOICE communications] is an Internet Telephony Service Provider (ITSP) that uses OpenVZ as the basis for their [http://www.vpas.ca/ Virtual Private Asterisk Server (VPAS)] offering. They also offer consulting on running Asterisk (the leading Open Source telephony platform) on OpenVZ. netVOICE is a Digium Approved Reseller and has a Digium Certified Asterisk Professional [http://www.digium.com/en/training/certifications/ (dCAP)] on staff.&lt;br /&gt;
&lt;br /&gt;
== Proxmox ==&lt;br /&gt;
&lt;br /&gt;
[http://proxmox.com Proxmox] is [http://www.proxmox.com/cms_proxmox/en/virtualization/openvz/ using OpenVZ] for their spam filtering appliance (see [[Proxmox Mail Gateway in VE]]). Proxmox is also the author of the [http://www.proxmox.com/cms_proxmox/en/virtualization/openvz/vzdump/ vzdump] utility.&lt;br /&gt;
&lt;br /&gt;
== SpiderTools.com ==&lt;br /&gt;
[http://spidertools.com SpiderTools] provides training for OpenVZ servers.  Students work on live servers to gain skills on how to implement OpenVZ.  20% of all sales go back to OpenVZ for development.  Students will get 6 weeks of live instruction and support.&lt;br /&gt;
&lt;br /&gt;
== Thomas-Krenn.AG ==&lt;br /&gt;
[http://www.thomas-krenn.com Thomas-Krenn.AG] is a server specialist, selling server systems and solutions. One of its solution products is a pre-installed [http://www.thomas-krenn.com/en/system-solutions/ha-linux-cluster.html cluster system], built with Virtuozzo. They have published how to build such a cluster with OpenVZ at [[HA_cluster_with_DRBD_and_Heartbeat]].&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Control panels]]&lt;br /&gt;
* [[2006 contributions]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=3438</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=3438"/>
		<updated>2007-09-06T12:02:50Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: updated URL to german cluster documentation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example, the two machines forming the cluster run CentOS 4.3. The article also shows how to perform kernel updates in the cluster, including necessary steps such as recompiling the DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Update:&amp;lt;/b&amp;gt; this howto currently does not describe details on OpenVZ Kernel 2.6.18, which contains DRBD version 8.*. Meanwhile, some hints on using OpenVZ Kernel 2.6.18 with DRBD 8 can be found in [http://forum.openvz.org/index.php?t=msg&amp;amp;th=3213&amp;amp;start=0&amp;amp; this thread in the forum].&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
Some other additional information can be found in the documentation of the Thomas-Krenn.AG cluster (the author of this howto works in the cluster development there, which is how he was able to write this howto :-). The full documentation, with helpful illustrations, is currently only available in German: http://www.thomas-krenn.com/en/service-support/knowledge-center/cluster/documentation.html&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module. The DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
On a hardware node intended for production use, no application should run that is not strictly needed for running OpenVZ (for security reasons, anything not needed by OpenVZ should run in a VE). Therefore, compile DRBD and Heartbeat on another machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has been used successfully in many two-node clusters around the world. As the codebase of the 1.2.* branch has been in production use for many years, the code is very stable. At the time of writing, Heartbeat 1.2.4 is the current version of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html, at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the package:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you find four RPM packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, and heartbeat-stonith-1.2.4-1.i386.rpm are needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, make sure to use the version that matches the DRBD version included in the OpenVZ kernel you want to use. If you are unsure about the version, perform the following steps while running that OpenVZ kernel on a test machine (I used another virtual machine on a VMware Server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that this way the kernel-devel package from CentOS is used, but this does not matter, as the created drbd-km RPM will not be used (the DRBD kernel module is already included in the OpenVZ kernel). If the kernel-devel package does not match the currently running kernel version, you can execute 'make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/' to point directly to the kernel sources.&lt;br /&gt;
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines in the same way as you would for a normal OpenVZ installation, but do not create a filesystem for /vz. That filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update grub configuration to use the OpenVZ kernel by default. Disable starting of OpenVZ on system boot on both nodes (OpenVZ will be started and stopped by Heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig vz off&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes, create a partition that acts as the underlying DRBD device. The partitions should have exactly the same size (for this example I created a 10 GB partition hda3 with fdisk on each node). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in sync, and DRBD has no way of knowing whether you want the initial sync to go from ovz-node1 to ovz-node2, or from ovz-node2 to ovz-node1. As there is no data on it yet, the direction does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable, although it is still syncing in the background.&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
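While the background sync is running, it can be handy to read the progress programmatically, e.g. from a small monitoring script. Below is a minimal sketch (the helper name is our own invention, not part of DRBD) that extracts the sync'ed percentage from a /proc/drbd status line of the form shown above:

```shell
# Extract the "sync'ed" percentage from a /proc/drbd status line.
# Prints nothing if the line carries no sync progress (e.g. when the
# resource is already connected and consistent).
get_sync_percent() {
    printf '%s\n' "$1" | sed -n "s/.*sync'ed:[[:space:]]*\([0-9.]*\)%.*/\1/p"
}

# Example with a status line as shown in the article:
get_sync_percent "        [=>..................] sync'ed:  6.6% (8805/9418)M"
```

On a live node you would feed it the relevant line of /proc/drbd, for example `get_sync_percent "$(grep "sync'ed" /proc/drbd)"`.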
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Move the original /vz directory to /vz.orig and recreate the /vz directory to have it as a mount point '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /vz /vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir /vz&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards move the necessary OpenVZ directories (/etc/vz, /etc/sysconfig/vz-scripts, /var/vzquota) and replace them with symbolic links '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/vz /etc/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/vz /etc/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it '''(only on ovz-node1!)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /vz&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz.orig/* /vz/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/vz /vz/cluster/etc/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /vz/cluster/etc/sysconfig/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /vz/cluster/var/&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat RPMs on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to &amp;lt;code&amp;gt;/etc/ha.d/ha.cf&amp;lt;/code&amp;gt; on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to &amp;lt;code&amp;gt;/etc/ha.d/authkeys&amp;lt;/code&amp;gt; on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file haresources and copy it to &amp;lt;code&amp;gt;/etc/ha.d/haresources&amp;lt;/code&amp;gt; on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Note that it is not necessary to configure IPs for gratuitous arp here. The gratuitous arp is done by OpenVZ itself, through &amp;lt;code&amp;gt;/etc/sysconfig/network-scripts/ifup-venet&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/usr/lib/vzctl/scripts/vps-functions&amp;lt;/code&amp;gt;. Below is an example for the haresources file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 drbddisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Before going into production: testing, testing, testing, and ...hm... testing! ==&lt;br /&gt;
&lt;br /&gt;
The installation of the cluster is finished at this point. Before putting the cluster into production, it is very important to test it. Depending on your particular hardware, problems may only show up when a failover actually becomes necessary, and as the cluster is all about high availability, such problems must be found before the cluster goes into production.&lt;br /&gt;
&lt;br /&gt;
Here is one example: the e1000 driver included in kernels &amp;lt; 2.6.12 has a problem when a cable is unplugged while broadcast packets are still being sent out on that interface. When using broadcast communication in Heartbeat on a crossover link, this fills up the transmit ring buffer on the adapter (the buffer is full about 8 minutes after the cable is unplugged). Using unicast communication in Heartbeat, for example, avoids the problem. For details see: http://www.osdl.org/developer_bugzilla/show_bug.cgi?id=699#c22&lt;br /&gt;
&lt;br /&gt;
Without testing you may not be aware of such problems and may only hit them once the cluster is in production and a failover becomes necessary. So test your cluster carefully!&lt;br /&gt;
&lt;br /&gt;
Possible tests can include:&lt;br /&gt;
* power outage test of active node&lt;br /&gt;
* power outage test of passive node&lt;br /&gt;
* network connection outage test of eth0 of active node&lt;br /&gt;
* network connection outage test of eth0 of passive node&lt;br /&gt;
* network connection outage test of crossover network connection&lt;br /&gt;
* ...&lt;br /&gt;
&lt;br /&gt;
As mentioned above, some problems only arise after an outage has lasted several minutes. So also run tests with a duration of more than an hour, for example.&lt;br /&gt;
&lt;br /&gt;
Before you start testing, build a test plan. Some valuable information on this can be found in chapter 3, &amp;quot;Testing a highly available Tivoli Storage Manager cluster environment&amp;quot;, of the Redbook ''IBM Tivoli Storage Manager in a Clustered Environment'', see http://www.redbooks.ibm.com/abstracts/sg246679.html. That chapter notes that, in the authoring team's experience, the testing phase must take at least twice the total implementation time of the cluster.&lt;br /&gt;
&lt;br /&gt;
== Before installing kernel updates: testing again ==&lt;br /&gt;
&lt;br /&gt;
New OpenVZ kernels often include driver updates. This kernel, for example, includes an update of the e1000 module: http://openvz.org/news/updates/kernel-022stab078.21&lt;br /&gt;
&lt;br /&gt;
To avoid overlooking problems with new components (such as a newer kernel), it is necessary to repeat the tests mentioned above. But as the cluster is already in production, a second cluster (a test cluster) with the same hardware as the main cluster is needed. Use this test cluster to test kernel updates or major OS updates for the hardware node before applying them to the production cluster.&lt;br /&gt;
&lt;br /&gt;
I know this is not an easy task, as it is time-consuming and requires additional hardware solely for testing. But when truly business-critical applications run on the cluster, it is very reassuring to know that the cluster also works fine with new updates installed on the hardware node. In many cases a dedicated test cluster and the time effort for testing updates may be too costly. If you cannot test updates this way, keep in mind that over time (as you install security updates of the OS or the kernel) you end up with a cluster configuration that you have never tested.&lt;br /&gt;
&lt;br /&gt;
If you need a tested cluster (including tested kernel updates), you may take a look at this Virtuozzo cluster: http://www.thomas-krenn.com/cluster&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when it contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools. When an OpenVZ kernel contains a new DRBD version, the DRBD API version of the userspace tools must match the API version of the DRBD module included in the OpenVZ kernel. The API versions can be found at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The safest approach is to always use the userspace tools version that matches the DRBD module version included in the OpenVZ kernel.&lt;br /&gt;
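To double-check the match on a running node, you can compare the api number reported by /proc/drbd against the one your userspace tools were built for. Below is a minimal sketch, assuming the 0.7-style version line format shown throughout this article (the helper name is our own invention):

```shell
# Parse the API number out of a DRBD version line such as
#   "version: 0.7.17 (api:77/proto:74)"
get_drbd_api() {
    printf '%s\n' "$1" | sed -n 's|.*(api:\([0-9][0-9]*\)/.*|\1|p'
}

# On a live node you would feed it the first line of /proc/drbd, e.g.:
#   get_drbd_api "$(head -n 1 /proc/drbd)"
get_drbd_api "version: 0.7.17 (api:77/proto:74)"   # prints 77
```

If the number printed for the running module differs from the api version of the userspace tools you plan to install, build the matching tools version first, as described above.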
&lt;br /&gt;
In this example the initial cluster installation contained OpenVZ kernel 2.6.8-022stab078.10, which contains the DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which contains the DRBD module 0.7.20.&lt;br /&gt;
In the first step, build the DRBD userspace tools version 0.7.20 on your build machine. Then stop Heartbeat and DRBD on the passive node (you can use 'cat /proc/drbd' to see which node is active and which one is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as the default kernel in /etc/grub.conf and reboot this node.&lt;br /&gt;
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch over the services so that the currently active node becomes the passive node. Execute the following on the still active node (note that the hb_standby command may be located in /usr/lib/heartbeat instead):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now perform the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as the default kernel in /etc/grub.conf, and reboot the node.&lt;br /&gt;
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
After every update of the OpenVZ tools, ensure that OpenVZ is not started on system boot. To disable starting OpenVZ on system boot, execute on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig vz off&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=3437</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=3437"/>
		<updated>2007-09-06T09:26:10Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: added a hint on DRBD 8 which comes with OpenVZ 2.6.18 (added a link to a thread in the forum)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example, the two machines forming the cluster run CentOS 4.3. The article also shows how to perform kernel updates in the cluster, including necessary steps such as recompiling the DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Update:&amp;lt;/b&amp;gt; this howto currently does not describe details on OpenVZ Kernel 2.6.18, which contains DRBD version 8.*. Meanwhile, some hints on using OpenVZ Kernel 2.6.18 with DRBD 8 can be found in [http://forum.openvz.org/index.php?t=msg&amp;amp;th=3213&amp;amp;start=0&amp;amp; this thread in the forum].&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
Further information can be found in the documentation of the Thomas-Krenn.AG cluster (the author of this howto works in cluster development there, which is how he was able to write it :-). The full documentation, with helpful illustrations, is currently only available in German:&lt;br /&gt;
http://my.thomas-krenn.com/service_support/index.php/page.242&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module, but the DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile the tools yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
On a hardware node for production use, nothing should run that is not strictly needed for running OpenVZ (for security reasons, anything not required by OpenVZ itself should run inside a VE). Therefore, compile DRBD and Heartbeat on a separate machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has been used successfully in many two-node clusters around the world. As its codebase has been in production use for many years, the code is very stable. At the time of writing, version 1.2.4 is the current release of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the packages:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you will find four RPM packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, and heartbeat-stonith-1.2.4-1.i386.rpm are needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, take care to choose the version that matches the DRBD module included in the OpenVZ kernel you want to use. If you are unsure about the version, perform the following steps while running that OpenVZ kernel on a test machine (I used another virtual machine on a VMware server for this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for version 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that this way the kernel-devel package from CentOS is used, but this does not matter, as the generated drbd-km RPM will not be used (the DRBD kernel module is already included in the OpenVZ kernel). If the kernel-devel package does not match the version of the currently running kernel, you can execute 'make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/' to point directly to the kernel sources.&lt;br /&gt;
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines in the same way as you would for a normal OpenVZ installation, but do not create a filesystem for /vz. This filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update grub configuration to use the OpenVZ kernel by default. Disable starting of OpenVZ on system boot on both nodes (OpenVZ will be started and stopped by Heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig vz off&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes, create a partition that acts as the underlying DRBD device. The partitions should have exactly the same size (for this example I created a 10 GB partition hda3 with fdisk on each node). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
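To double-check that the two backing partitions really have the same size, you can compare their size in blocks on both nodes before setting up DRBD (a minimal sketch, assuming /dev/hda3 as in this example):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# grep hda3 /proc/partitions&lt;br /&gt;
[root@ovz-node2 ~]# grep hda3 /proc/partitions&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The block count reported in the third column of /proc/partitions should be identical on both nodes.&lt;br /&gt;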
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in sync, and DRBD cannot know whether you want the initial sync to run from ovz-node1 to ovz-node2 or from ovz-node2 to ovz-node1. As there is no data on the device yet, the direction does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable, even though it is still syncing in the background.&lt;br /&gt;
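If you want to follow the progress of the initial sync, a simple way (assuming the watch utility is installed) is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# watch -n 1 cat /proc/drbd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This re-reads /proc/drbd every second until you interrupt it with Ctrl-C.&lt;br /&gt;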
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Move the original /vz directory to /vz.orig and recreate the /vz directory to have it as a mount point '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /vz /vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir /vz&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards move the necessary OpenVZ directories (/etc/vz, /etc/sysconfig/vz-scripts, /var/vzquota) and replace them with symbolic links '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/vz /etc/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/vz /etc/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it '''(only on ovz-node1!)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /vz&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz.orig/* /vz/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/vz /vz/cluster/etc/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /vz/cluster/etc/sysconfig/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /vz/cluster/var/&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat RPMs on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to &amp;lt;code&amp;gt;/etc/ha.d/ha.cf&amp;lt;/code&amp;gt; on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to &amp;lt;code&amp;gt;/etc/ha.d/authkeys&amp;lt;/code&amp;gt; on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file haresources and copy it to &amp;lt;code&amp;gt;/etc/ha.d/haresources&amp;lt;/code&amp;gt; on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Note that it is not necessary to configure IPs for gratuitous arp here. The gratuitous arp is done by OpenVZ itself, through &amp;lt;code&amp;gt;/etc/sysconfig/network-scripts/ifup-venet&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/usr/lib/vzctl/scripts/vps-functions&amp;lt;/code&amp;gt;. Below is an example for the haresources file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 drbddisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Finally, start Heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
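After Heartbeat has taken over resource management, you can quickly check which node is currently active: on the active node the DRBD device is Primary, /vz is mounted, and OpenVZ is running (a minimal sketch; the exact output depends on your setup):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
[root@ovz-node1 ~]# mount | grep /vz&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/vz status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;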
&lt;br /&gt;
== Before going in production: testing, testing, testing, and ...hm... testing! ==&lt;br /&gt;
&lt;br /&gt;
The installation of the cluster is finished at this point. Before putting the cluster into production, it is very important to test it. Depending on the hardware you use, you may encounter problems when a failover becomes necessary. And as the cluster is all about high availability, such problems must be found before the cluster is used in production.&lt;br /&gt;
&lt;br /&gt;
Here is one example: the e1000 driver included in kernels &amp;lt; 2.6.12 has a problem when a cable is unplugged while broadcast packets are still being sent out on that interface. When using broadcast communication in Heartbeat on a crossover link, this fills up the transmit ring buffer of the adapter (the buffer is full about 8 minutes after the cable was unplugged). Using unicast communication in Heartbeat, for example, avoids the problem. For details see: http://www.osdl.org/developer_bugzilla/show_bug.cgi?id=699#c22&lt;br /&gt;
&lt;br /&gt;
Without testing, you may not be aware of such problems and may only hit them once the cluster is in production and a failover becomes necessary. So test your cluster carefully!&lt;br /&gt;
&lt;br /&gt;
Possible tests can include:&lt;br /&gt;
* power outage test of active node&lt;br /&gt;
* power outage test of passive node&lt;br /&gt;
* network connection outage test of eth0 of active node&lt;br /&gt;
* network connection outage test of eth0 of passive node&lt;br /&gt;
* network connection outage test of crossover network connection&lt;br /&gt;
* ...&lt;br /&gt;
&lt;br /&gt;
As mentioned above, some problems only show up after an outage has lasted several minutes, so also run the tests with a duration of &amp;gt;1h, for example.&lt;br /&gt;
&lt;br /&gt;
Before you start testing, build a test plan. Some valuable information on this can be found in chapter 3, &amp;quot;Testing a highly available Tivoli Storage Manager cluster environment&amp;quot;, of the Redbook ''IBM Tivoli Storage Manager in a Clustered Environment'', see http://www.redbooks.ibm.com/abstracts/sg246679.html. That chapter notes that in the authoring team's experience, the testing phase must take at least twice the total implementation time of the cluster.&lt;br /&gt;
&lt;br /&gt;
== Before installing kernel updates: testing again ==&lt;br /&gt;
&lt;br /&gt;
New OpenVZ kernels often include driver updates. This kernel, for example, includes an update of the e1000 module: http://openvz.org/news/updates/kernel-022stab078.21&lt;br /&gt;
&lt;br /&gt;
To avoid overlooking problems with new components (such as a newer kernel), the tests mentioned above must be repeated. But as the cluster is already in production, a second cluster (a test cluster) with the same hardware as the main cluster is needed. Use this test cluster to verify kernel updates or major OS updates for the hardware node before applying them to the production cluster.&lt;br /&gt;
&lt;br /&gt;
I know this is not an easy task, as it is time-consuming and requires additional hardware just for testing. But when really business-critical applications run on the cluster, it is very reassuring to know that the cluster also works fine with new updates installed on the hardware node. In many cases, a dedicated test cluster and the time effort for testing updates may simply cost too much. If you cannot test updates this way, keep in mind that over time (when you must install security updates of the OS or the kernel) you end up with a cluster configuration that you have not tested.&lt;br /&gt;
&lt;br /&gt;
If you need a tested cluster (including tested kernel updates), you may take a look at this Virtuozzo cluster: http://www.thomas-krenn.com/cluster&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when it contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools. When an OpenVZ kernel contains a new DRBD version, the DRBD API version of the userspace tools must match the API version of the DRBD module included in the OpenVZ kernel. The API versions can be found at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The safest approach is to always use the version of the DRBD userspace tools that exactly matches the version of the DRBD module included in the OpenVZ kernel.&lt;br /&gt;
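The API version of the currently loaded DRBD module can be read directly from /proc/drbd; in the transcripts in this article it appears as &amp;quot;api:77&amp;quot; (module 0.7.17) and &amp;quot;api:79&amp;quot; (module 0.7.20). A quick way to extract it (assuming the drbd module is loaded):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# grep -o 'api:[0-9]*' /proc/drbd&lt;br /&gt;
api:77&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;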
&lt;br /&gt;
In this example the initial cluster installation contained OpenVZ kernel 2.6.8-022stab078.10, which includes DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which includes DRBD module 0.7.20.&lt;br /&gt;
As the first step, build the DRBD userspace tools version 0.7.20 on your build machine. Then stop Heartbeat and DRBD on the passive node (hint: 'cat /proc/drbd' shows which node is active and which one is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as default kernel in /etc/grub.conf and then reboot this node.&lt;br /&gt;
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch the services over so that the currently active node becomes the passive node. Execute the following on the still active node (note that the hb_standby command may also be located in /usr/lib/heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now perform the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as the default kernel in /etc/grub.conf, and reboot the node.&lt;br /&gt;
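The steps above can be sketched as follows (package file names as in this example; ovz-node1 is now the passive node after the switch-over):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd stop&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
[root@ovz-node1 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
[root@ovz-node1 ~]# vi /etc/grub.conf    # set the new kernel as the default&lt;br /&gt;
[root@ovz-node1 ~]# reboot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;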
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
After every update of the OpenVZ tools, make sure that OpenVZ is still disabled on system boot (a tools update may re-enable the vz init script). To disable starting OpenVZ on system boot, execute on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig vz off&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Kernel_build&amp;diff=2771</id>
		<title>Kernel build</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Kernel_build&amp;diff=2771"/>
		<updated>2007-02-19T13:45:59Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: fixed info on modifying kernel configs, corrected ksubrelease (may not contain &amp;quot;-&amp;quot; character)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This FAQ will help you in case you want to apply patches to the kernel on your own or rebuild it from sources.&lt;br /&gt;
On RPM-based distros such as Red Hat Enterprise Linux/CentOS, Fedora Core, or SUSE, you can simply rebuild the kernel from the SRPM.&lt;br /&gt;
For other distros, you need to install the sources, then build and install the kernel manually. Details for both cases are given below.&lt;br /&gt;
&lt;br /&gt;
== Rebuilding kernel from SRPM ==&lt;br /&gt;
&lt;br /&gt;
'''Note''': some of the paths below include $TOPDIR, which is distribution-dependent and can be further redefined by the user. To find out the proper location on your system, issue this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;rpm --eval &amp;quot;%{_topdir}&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Download ===&lt;br /&gt;
Source RPMS for different OpenVZ kernel branches can be downloaded from http://openvz.org/download/kernel/. You can also access http://download.openvz.org/kernel/ directly, or use one of the [[download mirrors|mirrors]].&lt;br /&gt;
&lt;br /&gt;
=== Installation ===&lt;br /&gt;
Install the downloaded SRC RPM with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# rpm -ihv ovzkernel-2.6.16-026test018.1.src.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After successful installation, you can usually find the kernel sources in &amp;lt;code&amp;gt;$TOPDIR/SOURCES/&amp;lt;/code&amp;gt;&lt;br /&gt;
and the kernel spec file (&amp;lt;code&amp;gt;kernel-ovz.spec&amp;lt;/code&amp;gt;) in &amp;lt;code&amp;gt;$TOPDIR/SPECS&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Adding your own patches ===&lt;br /&gt;
To modify the kernel, you just need to add your patches to the kernel spec file and put them into the &amp;lt;code&amp;gt;$TOPDIR/SOURCES&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
Put your patch into the SOURCES directory with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cp &amp;lt;patch&amp;gt; $TOPDIR/SOURCES/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then open spec file &amp;lt;code&amp;gt;$TOPDIR/SPECS/kernel-ovz.spec&amp;lt;/code&amp;gt; in the editor and add the following lines:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Patch10000: &amp;lt;patch-name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%patch10000 -p1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
in the appropriate places, next to the existing similar lines.&lt;br /&gt;
&lt;br /&gt;
=== Adjust kernel version ===&lt;br /&gt;
Before rebuilding the kernel, make sure you have adjusted the kernel version in &amp;lt;code&amp;gt;kernel-ovz.spec&amp;lt;/code&amp;gt;.&lt;br /&gt;
This helps you distinguish the resulting binaries from already existing kernels&lt;br /&gt;
(or from the official OpenVZ kernels). To do so, edit the &amp;lt;code&amp;gt;$TOPDIR/SPECS/kernel-ovz.spec&amp;lt;/code&amp;gt; file and replace the following line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%define ksubrelease 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
with something like the following (note that ksubrelease may not contain the &amp;quot;-&amp;quot; character):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%define ksubrelease 1my.kernel.v1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Modifying configs ===&lt;br /&gt;
If you want to modify the kernel config, do the following before you continue with the next step, &amp;quot;Building RPMs&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cd $TOPDIR/SPECS&lt;br /&gt;
# rpmbuild -bp kernel-ovz.spec&lt;br /&gt;
# cd $TOPDIR/BUILD/kernel-2.6.16/linux-2.6.16&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
There you will find the configuration files in the subdirectory &amp;lt;code&amp;gt;config/*.config&amp;lt;/code&amp;gt;. Copy the one you want to modify to &amp;lt;code&amp;gt;$TOPDIR/BUILD/kernel-2.6.16/linux-2.6.16/.config&amp;lt;/code&amp;gt;. Then you can run make menuconfig or something similar to adjust the kernel configuration. Afterwards copy&lt;br /&gt;
&amp;lt;code&amp;gt;$TOPDIR/BUILD/kernel-2.6.16/linux-2.6.16/.config&amp;lt;/code&amp;gt; back to the &amp;lt;code&amp;gt;$TOPDIR/SOURCES&amp;lt;/code&amp;gt; directory, using the corresponding file name in the target directory.&lt;br /&gt;
Some background information on this procedure can be found in the following thread:  http://www.arcknowledge.com/gmane.comp.audio.planetccrma.general/2004-11/msg00018.html&lt;br /&gt;
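The procedure can be sketched as shell commands (&amp;lt;config-name&amp;gt; is a placeholder; use the config file matching your target configuration):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cd $TOPDIR/BUILD/kernel-2.6.16/linux-2.6.16&lt;br /&gt;
# cp config/&amp;lt;config-name&amp;gt;.config .config&lt;br /&gt;
# make menuconfig&lt;br /&gt;
# cp .config $TOPDIR/SOURCES/&amp;lt;config-name&amp;gt;.config&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;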
&lt;br /&gt;
=== Building RPMs ===&lt;br /&gt;
To rebuild the kernel, type the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cd $TOPDIR/SPECS&lt;br /&gt;
# rpmbuild -ba --target=i686 kernel-ovz.spec&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After successful kernel compilation, the binary RPMs can be found in &amp;lt;code&amp;gt;$TOPDIR/RPMS/i686&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Rebuilding kernel from sources ==&lt;br /&gt;
&lt;br /&gt;
=== Download ===&lt;br /&gt;
To compile an OpenVZ Linux kernel, you need to download the original Linux kernel sources and the OpenVZ patches for them.&lt;br /&gt;
&lt;br /&gt;
The Linux kernel can be found at [http://www.kernel.org/ kernel.org], e.g. the 2.6.16 kernel can be downloaded from [http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.16.tar.bz2 linux-2.6.16.tar.bz2].&lt;br /&gt;
&lt;br /&gt;
The appropriate OpenVZ patches for this kernel version can be found at http://openvz.org/download/kernel/&amp;lt;version&amp;gt;/patches/. For example, at the moment there is a patch [http://download.openvz.org/kernel/devel/026test018.1/patches/patch-026test018-combined.gz patch-026test018-combined.gz] available.&lt;br /&gt;
&lt;br /&gt;
Kernel configs are also available at the OpenVZ download site. The SMP config is used most frequently, so let's download [http://download.openvz.org/kernel/devel/026test018.1/configs/kernel-2.6.16-026test018-i686-smp.config.ovz kernel-2.6.16-026test018-i686-smp.config.ovz]&lt;br /&gt;
for this example.&lt;br /&gt;
&lt;br /&gt;
=== Building ===&lt;br /&gt;
First, extract the kernel sources from archive:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# tar vjxf linux-2.6.16.tar.bz2&lt;br /&gt;
# cd linux-2.6.16&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apply the combined OpenVZ patch to the kernel:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# gzip -dc patch-026test018-combined.gz | patch -p1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we need to place the config and build the kernel:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cp kernel-2.6.16-026test018-i686-smp.config.ovz .config&lt;br /&gt;
# make oldconfig&lt;br /&gt;
# make&lt;br /&gt;
# make modules&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation ===&lt;br /&gt;
After a successful build, the kernel can be installed on the machine with the following commands, run as the '''root''' user:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# make install&lt;br /&gt;
# make modules_install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You also need to edit your GRUB or LILO configuration to make the new kernel available at boot.&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category:Kernel]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Kernel_build&amp;diff=2770</id>
		<title>Kernel build</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Kernel_build&amp;diff=2770"/>
		<updated>2007-02-19T11:27:37Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: added info on modifying kernel configs when using source-RPM&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This FAQ will help you if you want to apply your own patches to the kernel or rebuild it from sources.&lt;br /&gt;
On RPM-based distros such as Red Hat Enterprise Linux/CentOS, Fedora Core, or SUSE, one can simply rebuild the kernel from the SRPM.&lt;br /&gt;
For other distros, you have to install the sources and build and install the kernel manually. Details for both cases are given below.&lt;br /&gt;
&lt;br /&gt;
== Rebuilding kernel from SRPM ==&lt;br /&gt;
&lt;br /&gt;
'''Note''': some of the paths below include $TOPDIR, which is distribution-dependent and can be further redefined by the user. To find out the proper location on your system, issue this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;rpm --eval &amp;quot;%{_topdir}&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
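&lt;br /&gt;
On CentOS 4, for example, this typically prints the following (this output is only a typical default and may differ if &amp;lt;code&amp;gt;%_topdir&amp;lt;/code&amp;gt; has been redefined, e.g. via ~/.rpmmacros):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# rpm --eval &amp;quot;%{_topdir}&amp;quot;&lt;br /&gt;
/usr/src/redhat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;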
&lt;br /&gt;
=== Download ===&lt;br /&gt;
Source RPMS for different OpenVZ kernel branches can be downloaded from http://openvz.org/download/kernel/. You can also access http://download.openvz.org/kernel/ directly, or use one of the [[download mirrors|mirrors]].&lt;br /&gt;
&lt;br /&gt;
=== Installation ===&lt;br /&gt;
Install the downloaded SRC RPM with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# rpm -ihv ovzkernel-2.6.16-026test018.1.src.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After successful installation, you can usually find the kernel sources in &amp;lt;code&amp;gt;$TOPDIR/SOURCES/&amp;lt;/code&amp;gt;&lt;br /&gt;
and the kernel spec file (&amp;lt;code&amp;gt;kernel-ovz.spec&amp;lt;/code&amp;gt;) in &amp;lt;code&amp;gt;$TOPDIR/SPECS&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Adding your own patches ===&lt;br /&gt;
To modify the kernel, you just need to add your patches to the kernel spec file and put them into the &amp;lt;code&amp;gt;$TOPDIR/SOURCES&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
Put your patch into the SOURCES directory with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cp &amp;lt;patch&amp;gt; $TOPDIR/SOURCES/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then open the spec file &amp;lt;code&amp;gt;$TOPDIR/SPECS/kernel-ovz.spec&amp;lt;/code&amp;gt; in an editor and add the following lines:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Patch10000: &amp;lt;patch-name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%patch10000 -p1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
in the appropriate places, next to the existing similar lines.&lt;br /&gt;
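&lt;br /&gt;
For example (the patch name &amp;lt;code&amp;gt;my-fix.patch&amp;lt;/code&amp;gt; below is just a placeholder), the two additions could look like this in the spec file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# next to the existing PatchNNN: lines&lt;br /&gt;
Patch10000: my-fix.patch&lt;br /&gt;
&lt;br /&gt;
# in the %prep section, after the existing %patch lines&lt;br /&gt;
%patch10000 -p1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;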
&lt;br /&gt;
=== Modifying configs ===&lt;br /&gt;
If you want to modify kernel configs, you need to do the changes via the kernel spec file. Insert your modifications after the following part of the spec file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# now run oldconfig over all the config files&lt;br /&gt;
for i in *.config*&lt;br /&gt;
do&lt;br /&gt;
        mv $i .config&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can add kernel settings for example in the following way:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        echo &amp;quot;YOUR_KERNEL_OPTION=m&amp;quot; &amp;gt;&amp;gt; .config&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you try to modify the config files directly (not through the spec file), running rpmbuild will fail with the following error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
make[1]: *** [nonint_oldconfig] Error 1&lt;br /&gt;
make: *** [nonint_oldconfig] Error 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Some information on this can be found in the following thread:  http://www.redhat.com/archives/fedora-list/2005-October/msg00921.html&lt;br /&gt;
&lt;br /&gt;
=== Building RPMs ===&lt;br /&gt;
Before rebuilding the kernel, make sure you have adjusted the kernel version in &amp;lt;code&amp;gt;kernel-ovz.spec&amp;lt;/code&amp;gt;.&lt;br /&gt;
This helps you to distinguish the resulting binaries from already existing kernels&lt;br /&gt;
(or from the official OpenVZ kernels). To do so, edit the &amp;lt;code&amp;gt;$TOPDIR/SPECS/kernel-ovz.spec&amp;lt;/code&amp;gt; file and replace the following line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%define ksubrelease 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
with something like&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%define ksubrelease 1-my.kernel.v1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To rebuild the kernel, type the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cd $TOPDIR/SPECS&lt;br /&gt;
# rpmbuild -ba --target=i686 kernel-ovz.spec&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After successful kernel compilation, the binary RPMs can be found in &amp;lt;code&amp;gt;$TOPDIR/RPMS/i686&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Rebuilding kernel from sources ==&lt;br /&gt;
&lt;br /&gt;
=== Download ===&lt;br /&gt;
To compile the OpenVZ Linux kernel, you need to download the original Linux kernel sources and the OpenVZ patches for them.&lt;br /&gt;
&lt;br /&gt;
Linux kernel can be found at [http://www.kernel.org/ kernel.org], e.g. 2.6.16 kernel can be downloaded from [http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.16.tar.bz2 linux-2.6.16.tar.bz2].&lt;br /&gt;
&lt;br /&gt;
Appropriate OpenVZ patches for this kernel version can be found at http://openvz.org/download/kernel/&amp;lt;version&amp;gt;/patches/. For example, at the moment there is a patch [http://download.openvz.org/kernel/devel/026test018.1/patches/patch-026test018-combined.gz patch-026test018-combined.gz] available.&lt;br /&gt;
&lt;br /&gt;
Kernel configs are also available at OpenVZ download site. Most frequently SMP config is used, so let's download [http://download.openvz.org/kernel/devel/026test018.1/configs/kernel-2.6.16-026test018-i686-smp.config.ovz kernel-2.6.16-026test018-i686-smp.config.ovz]&lt;br /&gt;
for this example.&lt;br /&gt;
&lt;br /&gt;
=== Building ===&lt;br /&gt;
First, extract the kernel sources from archive:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# tar vjxf linux-2.6.16.tar.bz2&lt;br /&gt;
# cd linux-2.6.16&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apply OpenVZ patches to the kernel:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# gzip -dc patch-026test018-combined.gz | patch -p1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we need to place the config and build the kernel:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cp kernel-2.6.16-026test018-i686-smp.config.ovz .config&lt;br /&gt;
# make oldconfig&lt;br /&gt;
# make&lt;br /&gt;
# make modules&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation ===&lt;br /&gt;
After a successful build, the kernel can be installed on the machine with the following commands, run as the '''root''' user:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# make install&lt;br /&gt;
# make modules_install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You also need to edit your GRUB or LILO configuration to make the new kernel available at boot.&lt;br /&gt;
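&lt;br /&gt;
For GRUB, a boot entry could look roughly like the following sketch (the version string and paths here are only assumptions and depend on your installation and partition layout):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
title CentOS (2.6.16-026test018)&lt;br /&gt;
        root (hd0,0)&lt;br /&gt;
        kernel /boot/vmlinuz-2.6.16-026test018 ro root=/dev/hda1&lt;br /&gt;
        initrd /boot/initrd-2.6.16-026test018.img&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;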
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category:Kernel]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2583</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2583"/>
		<updated>2006-12-13T09:59:29Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: added a hint about gratuitous arp /* Setting up Heartbeat */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example the two machines building the cluster run CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps like recompiling the DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
Some further information can be found in the documentation of the Thomas-Krenn.AG cluster (the author of this howto works in cluster development there, which is how he was able to write this howto :-). The full documentation, with interesting illustrations, is currently only available in German:&lt;br /&gt;
http://my.thomas-krenn.com/service_support/index.php/page.242&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module. The DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
On a hardware node in production use, there should not be any application that is not strictly needed for running OpenVZ (anything not needed by OpenVZ should run in a VE for security reasons). Therefore, compile DRBD and Heartbeat on another machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has been used successfully in many two-node clusters around the world. As the 1.2.* codebase has been in production use for many years, it is very stable. At the time of writing, Heartbeat 1.2.4 is the current version of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html, at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the package:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you find four RPM packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, and heartbeat-stonith-1.2.4-1.i386.rpm are needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, make sure to use the version that matches the DRBD version included in the OpenVZ kernel you want to use. If you are unsure about the version, do the following steps while running the OpenVZ kernel that you want to use on a test machine (I used another virtual machine on a VMware Server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that in this way the kernel-devel package from CentOS is used, but this does not matter, as the created drbd-km RPM will not be used (the DRBD kernel module is already included in the OpenVZ kernel). If the kernel-devel package is not the same version as the currently running kernel package, you can execute 'make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/' to point directly to the kernel sources.&lt;br /&gt;
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines in the same way as you would for a normal OpenVZ installation, but do not create a filesystem for /vz. This filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update grub configuration to use the OpenVZ kernel by default. Disable starting of OpenVZ on system boot on both nodes (OpenVZ will be started and stopped by Heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes, create a partition that acts as the underlying DRBD device. The partitions should have exactly the same size (I created a 10 GB partition hda3 using fdisk on each node for this example). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in sync, and DRBD has no way to know whether you want the initial sync to go from ovz-node1 to ovz-node2, or from ovz-node2 to ovz-node1. As there is no data on the device yet, the direction does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable (although it is still syncing in the background).&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Move the original /vz directory to /vz.orig and recreate the /vz directory to have it as a mount point '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /vz /vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir /vz&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards move the necessary OpenVZ directories (/etc/vz, /etc/sysconfig/vz-scripts, /var/vzquota) and replace them with symbolic links '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/vz /etc/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/vz /etc/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it '''(only on ovz-node1!)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /vz&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz.orig/* /vz/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/vz /vz/cluster/etc/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /vz/cluster/etc/sysconfig/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /vz/cluster/var/&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat RPMs on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to &amp;lt;code&amp;gt;/etc/ha.d/ha.cf&amp;lt;/code&amp;gt; on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to &amp;lt;code&amp;gt;/etc/ha.d/authkeys&amp;lt;/code&amp;gt; on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file haresources and copy it to &amp;lt;code&amp;gt;/etc/ha.d/haresources&amp;lt;/code&amp;gt; on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Note that it is not necessary to configure IPs for gratuitous arp here. The gratuitous arp is done by OpenVZ itself, through &amp;lt;code&amp;gt;/etc/sysconfig/network-scripts/ifup-venet&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/usr/lib/vzctl/scripts/vps-functions&amp;lt;/code&amp;gt;. Below is an example for the haresources file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 drbddisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Before going in production: testing, testing, testing, and ...hm... testing! ==&lt;br /&gt;
&lt;br /&gt;
The installation of the cluster is finished at this point. Before putting the cluster into production, it is very important to test it. Depending on your hardware, you may encounter problems when a failover becomes necessary. And as the cluster is about high availability, such problems must be found before the cluster is used in production.&lt;br /&gt;
&lt;br /&gt;
Here is one example: the e1000 driver included in kernels &amp;lt; 2.6.12 has a problem when a cable gets unplugged while broadcast packets are still being sent out on that interface. When using broadcast communication in Heartbeat on a crossover link, this fills up the transmit ring buffer on the adapter (the buffer is full about 8 minutes after the cable got unplugged). Using unicast communication in Heartbeat, for example, avoids the problem. For details see: http://www.osdl.org/developer_bugzilla/show_bug.cgi?id=699#c22&lt;br /&gt;
&lt;br /&gt;
Without testing you may not be aware of such problems and may only face them when the cluster is in production and a failover is necessary. So test your cluster carefully!&lt;br /&gt;
&lt;br /&gt;
Possible tests can include:&lt;br /&gt;
* power outage test of active node&lt;br /&gt;
* power outage test of passive node&lt;br /&gt;
* network connection outage test of eth0 of active node&lt;br /&gt;
* network connection outage test of eth0 of passive node&lt;br /&gt;
* network connection outage test of crossover network connection&lt;br /&gt;
* ...&lt;br /&gt;
&lt;br /&gt;
As mentioned above, some problems only arise after an outage lasts longer than a few minutes. So also run tests with a duration of more than an hour, for example.&lt;br /&gt;
&lt;br /&gt;
Before you start to test, build a test plan. Some valuable information on this can be found in chapter 3, &amp;quot;Testing a highly available Tivoli Storage Manager cluster environment&amp;quot;, of the Redbook ''IBM Tivoli Storage Manager in a Clustered Environment'', see http://www.redbooks.ibm.com/abstracts/sg246679.html. That chapter mentions that, in the authoring team's experience, the testing phase must take at least twice the total implementation time of the cluster.&lt;br /&gt;
&lt;br /&gt;
== Before installing kernel updates: testing again ==&lt;br /&gt;
&lt;br /&gt;
New OpenVZ kernels often include driver updates. This kernel, for example, includes an update of the e1000 module: http://openvz.org/news/updates/kernel-022stab078.21&lt;br /&gt;
&lt;br /&gt;
To avoid overlooking problems with new components (such as a newer kernel), it is necessary to redo the tests mentioned above. But as the cluster is already in production, a second cluster (a test cluster) with the same hardware as the main cluster is needed. Use this test cluster to test kernel updates or major OS updates for the hardware node before applying them to the production cluster.&lt;br /&gt;
&lt;br /&gt;
I know this is not an easy task, as it is time-consuming and requires additional hardware just for testing. But when truly business-critical applications are running on the cluster, it is very good to know that the cluster also works fine with new updates installed on the hardware node. In many cases a dedicated test cluster and the time effort for testing updates may be too costly. If you cannot test updates this way, keep in mind that over time (when you must install security updates of the OS or the kernel) you end up with a cluster configuration that you have not tested.&lt;br /&gt;
&lt;br /&gt;
If you need a tested cluster (including tested kernel updates), you may take a look at this Virtuozzo cluster: http://www.thomas-krenn.com/cluster&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when it contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools. When an OpenVZ kernel contains a new DRBD version, it is important that the DRBD API version of the userspace tools matches the API version of the DRBD module that is included in the OpenVZ kernel. The API versions can be found at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The best way is to always use the version of the DRBD userspace tools that matches the version of the DRBD module that is included in the OpenVZ kernel.&lt;br /&gt;
&lt;br /&gt;
In this example the initial cluster installation contained OpenVZ kernel 2.6.8-022stab078.10, which contains the DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which contains the DRBD module 0.7.20.&lt;br /&gt;
In the first step, build the DRBD userspace tools version 0.7.20 on your build machine. Then stop Heartbeat and DRBD on the passive node (hint: 'cat /proc/drbd' shows which node is active and which one is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as default kernel in /etc/grub.conf and then reboot this node.&lt;br /&gt;
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch over the services so that the currently active node becomes the passive node. Execute the following on the still active node (on 32-bit installations the hb_standby command may be located in /usr/lib/heartbeat instead):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now do the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as default kernel in /etc/grub.conf and reboot the node.&lt;br /&gt;
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
After every update of the OpenVZ tools, ensure that OpenVZ is still not started on system boot (it is started and stopped by Heartbeat). To disable starting of OpenVZ on system boot, execute on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=FAQ&amp;diff=2582</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=FAQ&amp;diff=2582"/>
		<updated>2006-12-13T09:25:13Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: updated link /* What hardware is supported by OpenVZ kernel? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== General ==&lt;br /&gt;
===== What is a Virtual Environment (Virtual Private Server, VPS, VE)? =====&lt;br /&gt;
:See [[VE]]&lt;br /&gt;
&lt;br /&gt;
===== Who needs OpenVZ? How can it be used? =====&lt;br /&gt;
:See [[Use cases]]&lt;br /&gt;
&lt;br /&gt;
===== How is OpenVZ different from other technologies? =====&lt;br /&gt;
:See [[Introduction to virtualization]]&lt;br /&gt;
&lt;br /&gt;
===== How is OpenVZ updated and why is it secure? =====&lt;br /&gt;
:See [[Security]]&lt;br /&gt;
&lt;br /&gt;
===== I want to show my appreciation for OpenVZ and put a logo on my site. Where can I get it? =====&lt;br /&gt;
:See [[Artwork]]&lt;br /&gt;
&lt;br /&gt;
== Installation and upgrade ==&lt;br /&gt;
&lt;br /&gt;
===== What hardware is supported by OpenVZ kernel? =====&lt;br /&gt;
:See [http://www.swsoft.com/en/products/virtuozzo/hcl/ Virtuozzo HCL].&lt;br /&gt;
&lt;br /&gt;
===== Why are there different kernel flavours available and what do they mean? =====&lt;br /&gt;
:See [[Different kernel flavors (UP, SMP, ENTERPRISE, ENTNOSPLIT)]]&lt;br /&gt;
&lt;br /&gt;
===== How do I rebuild the kernel? =====&lt;br /&gt;
:See [[Kernel build]]&lt;br /&gt;
&lt;br /&gt;
===== What does 021stab018 in OpenVZ kernel version mean? =====&lt;br /&gt;
:See [[Kernel versioning]]&lt;br /&gt;
&lt;br /&gt;
===== How can I check package signatures? =====&lt;br /&gt;
:See [[Package signatures]]&lt;br /&gt;
&lt;br /&gt;
===== Is it possible to run an x86 VPS on an x86_64 arch? =====&lt;br /&gt;
:Sure :) We actually did some work on that to enable migration of an x86 VE from x86 to x86_64 and back, and to enable using 32-bit iptables in a 32-bit VE on an x86_64 system.&lt;br /&gt;
&lt;br /&gt;
===== What filesystem should I choose for storing my VEs? =====&lt;br /&gt;
The safest choice for storing your VEs is ext2/3. Remember that ReiserFS is less stable than ext2/3.&lt;br /&gt;
If you choose to use XFS, keep in mind that there is no support for disk quota inside VEs.&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
&lt;br /&gt;
===== How do I set up VPN for a VE? =====&lt;br /&gt;
:See [[VPN via the TUN/TAP device]]&lt;br /&gt;
&lt;br /&gt;
===== What is veth and how do I use it? =====&lt;br /&gt;
:See [[Virtual Ethernet device]]&lt;br /&gt;
&lt;br /&gt;
== User Beancounters ==&lt;br /&gt;
&lt;br /&gt;
===== What are those User Beancounters? =====&lt;br /&gt;
See [[UBC]].&lt;br /&gt;
&lt;br /&gt;
===== What units are UBC parameters measured in? =====&lt;br /&gt;
See [[UBC parameter units]]&lt;br /&gt;
&lt;br /&gt;
===== How do I set up a VE which is able to get X Mb of RAM? =====&lt;br /&gt;
See [[Setting UBC parameters]].&lt;br /&gt;
&lt;br /&gt;
===== I cannot start a program in a VE: it reports out of memory. What do I do? =====&lt;br /&gt;
See [[Resource_shortage]].&lt;br /&gt;
&lt;br /&gt;
===== How can I reset &amp;lt;code&amp;gt;failcnt&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/proc/user_beancounters&amp;lt;/code&amp;gt;? =====&lt;br /&gt;
See [[UBC failcnt reset]].&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
===== My kernel crashed. What should I do? =====&lt;br /&gt;
:See [[When you have an oops]]&lt;br /&gt;
&lt;br /&gt;
===== I see a lot of processes in D state. What does that mean? =====&lt;br /&gt;
:See [[Processes in D state]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2397</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2397"/>
		<updated>2006-10-11T15:34:26Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: corrected a broken footnote /* Bofore going in production: testing, testing, testing, and ...hm... testing! */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example the two machines building the cluster run on CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps like recompiling the DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
Some additional information can be found in the documentation of the Thomas-Krenn.AG cluster (the author of this howto works in cluster development there, which is how he was able to write this howto :-). The full documentation, with helpful illustrations, is currently only available in German:&lt;br /&gt;
http://my.thomas-krenn.com/service_support/index.php/page.242&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module. The DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
On a hardware node for production use, nothing should run that is not really needed for running OpenVZ (anything not needed by OpenVZ should run in a VE for security reasons). Therefore, compile DRBD and Heartbeat on another machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has successfully been used in a lot of two-node clusters around the world. As the codebase of the 1.2.* branch has been in production use for many years now, the code is very stable. At the time of writing, Heartbeat 1.2.4 is the current version of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the packages:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you will find four RPM packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, and heartbeat-stonith-1.2.4-1.i386.rpm are needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, make sure to use the version that matches the DRBD module included in the OpenVZ kernel you want to use. If you are unsure about the version, do the following steps while running that OpenVZ kernel on a test machine (I used another virtual machine on a VMware Server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that this way the kernel-devel package from CentOS is used, but this does not matter, as the created drbd-km rpm will not be used (the DRBD kernel module is already included in the OpenVZ kernel). If the kernel-devel package does not match the currently running kernel, you can execute 'make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/' to point directly to the kernel sources.&lt;br /&gt;
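The KDIR pitfall can be caught before building. The following sketch is my own helper (not part of DRBD): it warns when no kernel-devel tree under /usr/src/kernels matches the running kernel, so you know to pass KDIR= explicitly.

```shell
#!/bin/sh
# Warn if no kernel-devel tree matches the running kernel; otherwise
# `make rpm` may silently build against the wrong headers.
running=$(uname -r)
match=0
for d in /usr/src/kernels/*; do
  # directory names look like 2.6.9-34.0.2.EL-i686
  case "$d" in
    */"$running"*) match=1 ;;
  esac
done
if [ "$match" = 1 ]; then
  echo "kernel-devel found for $running"
else
  echo "no kernel-devel for $running; pass KDIR=... to make rpm"
fi
```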
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines the same way you would for a normal OpenVZ installation, but do not create a filesystem for /vz. This filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update grub configuration to use the OpenVZ kernel by default. Disable starting of OpenVZ on system boot on both nodes (OpenVZ will be started and stopped by Heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes create a partition that acts as the underlying DRBD device. The partitions should have exactly the same size (for this example I created a 10 GB partition hda3 with fdisk on each node). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in sync, and DRBD has no way to know whether you want the initial sync to go from ovz-node1 to ovz-node2 or from ovz-node2 to ovz-node1. As there is no data on the device yet, it does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured via the 'rate' option in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable, although it is still syncing in the background.&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Move the original /vz directory to /vz.orig and recreate the /vz directory to have it as a mount point '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /vz /vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir /vz&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards move the necessary OpenVZ directories (/etc/vz, /etc/sysconfig/vz-scripts, /var/vzquota) and replace them with symbolic links '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/vz /etc/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/vz /etc/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
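A mistake in these links leaves a node reading stale local configuration after a failover, so it is worth verifying them on both nodes. Below is a small sketch; the is_cluster_link helper is my own, not part of OpenVZ.

```shell
#!/bin/sh
# Check that a path is a symlink pointing into the shared /vz/cluster tree.
is_cluster_link() {
  target=$(readlink "$1" 2>/dev/null) || return 1
  case "$target" in
    /vz/cluster/*) return 0 ;;
    *)             return 1 ;;
  esac
}

for p in /etc/vz /etc/sysconfig/vz-scripts /var/vzquota; do
  if is_cluster_link "$p"; then
    echo "$p: OK"
  else
    echo "$p: NOT a symlink into /vz/cluster"
  fi
done
```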
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it '''(only on ovz-node1!)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /vz&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz.orig/* /vz/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/vz /vz/cluster/etc/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /vz/cluster/etc/sysconfig/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /vz/cluster/var/&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat rpms on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to /etc/ha.d/ha.cf on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to /etc/ha.d/authkeys on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file haresources and copy it to /etc/ha.d/haresources on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 drbddisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Before going into production: testing, testing, testing, and ...hm... testing! ==&lt;br /&gt;
&lt;br /&gt;
The installation of the cluster is finished at this point. Before putting the cluster into production it is very important to test it. Depending on your hardware, you may encounter problems exactly when a failover is necessary. And as the cluster is about high availability, such problems must be found before the cluster goes into production.&lt;br /&gt;
&lt;br /&gt;
Here is one example: the e1000 driver included in kernels &amp;lt; 2.6.12 has a problem when a cable gets unplugged while broadcast packets are still being sent out on that interface. When using broadcast communication in Heartbeat on a crossover link, this fills up the transmit ring buffer of the adapter (the buffer is full about 8 minutes after the cable got unplugged). Using unicast communication in Heartbeat, for example, avoids the problem. For details see http://www.osdl.org/developer_bugzilla/show_bug.cgi?id=699#c22&lt;br /&gt;
&lt;br /&gt;
Without testing you may not become aware of such problems until the cluster is in production and a failover is necessary. So test your cluster carefully!&lt;br /&gt;
&lt;br /&gt;
Possible tests can include:&lt;br /&gt;
* power outage test of active node&lt;br /&gt;
* power outage test of passive node&lt;br /&gt;
* network connection outage test of eth0 of active node&lt;br /&gt;
* network connection outage test of eth0 of passive node&lt;br /&gt;
* network connection outage test of crossover network connection&lt;br /&gt;
* ...&lt;br /&gt;
&lt;br /&gt;
As mentioned above, some problems only arise after an outage lasts longer than a few minutes. So also run tests with a duration of &amp;gt;1h, for example.&lt;br /&gt;
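One way to quantify such tests is to probe the service address once per second during the failover and afterwards count how long it was unreachable. This is only a sketch: SERVICE_IP and the helper names are my own assumptions, not part of Heartbeat.

```shell
#!/bin/sh
# Probe a service address once per second; print "ok" or "lost" per probe.
SERVICE_IP=192.168.1.210   # assumed service IP of the cluster (adjust)

probe_once() {
  if ping -c 1 -W 1 "$SERVICE_IP" >/dev/null 2>/dev/null; then
    echo ok
  else
    echo lost
  fi
}

# Count the unreachable seconds in a recorded probe log.
downtime_seconds() {
  grep -c '^lost$'
}

# Example: run `while true; do probe_once; sleep 1; done > failover.log`
# during the test, then `downtime_seconds < failover.log` afterwards.
```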
&lt;br /&gt;
Before you start to test, build a test plan. Some valuable information on that can be found in chapter 3, &amp;quot;Testing a highly available Tivoli Storage Manager cluster environment&amp;quot;, of the Redbook ''IBM Tivoli Storage Manager in a Clustered Environment'', see http://www.redbooks.ibm.com/abstracts/sg246679.html. That chapter mentions that, in the authoring team's experience, the testing phase must take at least twice the total implementation time of the cluster.&lt;br /&gt;
&lt;br /&gt;
== Before installing kernel updates: testing again ==&lt;br /&gt;
&lt;br /&gt;
New OpenVZ kernels often include driver updates. This kernel, for example, includes an update of the e1000 module: http://openvz.org/news/updates/kernel-022stab078.21&lt;br /&gt;
&lt;br /&gt;
To avoid overlooking problems with new components (such as a newer kernel), it is necessary to re-do the tests mentioned above. But as the cluster is already in production, a second cluster (a test cluster) with the same hardware as the main cluster is needed. Use this test cluster to test kernel updates or major OS updates for the hardware node before putting them on the production cluster.&lt;br /&gt;
&lt;br /&gt;
I know this is not an easy task, as it is time-consuming and needs additional hardware just for testing. But when really business-critical applications are running on the cluster, it is very good to know that the cluster also works fine with new updates installed on the hardware node. In many cases a dedicated test cluster and the time effort for testing updates may cost too much. If you cannot do such tests of updates, keep in mind that over time (when you must install security updates of the OS or the kernel) you end up with a cluster configuration that you have not tested.&lt;br /&gt;
&lt;br /&gt;
If you need a tested cluster (including tested kernel updates), you may take a look at this Virtuozzo cluster: http://www.thomas-krenn.com/cluster&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when the kernel contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools: their DRBD API version must match the API version of the DRBD module that is included in the OpenVZ kernel. The API versions can be found at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The safest approach is to always use the userspace tools release that matches the version of the DRBD module included in the OpenVZ kernel.&lt;br /&gt;
&lt;br /&gt;
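The API match can be checked mechanically. The sketch below parses 'version:' lines of the format shown by /proc/drbd; the drbd_api helper is my own, and the two sample lines are the module and tools versions used in this article.

```shell
#!/bin/sh
# Extract the api number from a DRBD version line such as
#   version: 0.7.17 (api:77/proto:74)
drbd_api() {
  sed -n 's|.*(api:\([0-9]*\)/proto:.*|\1|p'
}

# Sample inputs taken from this article's /proc/drbd outputs.
module_api=$(echo "version: 0.7.17 (api:77/proto:74)" | drbd_api)
tools_api=$(echo "version: 0.7.20 (api:79/proto:74)" | drbd_api)

if [ "$module_api" = "$tools_api" ]; then
  echo "API versions match (api:$module_api)"
else
  echo "API mismatch: module api:$module_api, tools api:$tools_api"
fi
```

In practice you would feed the function the 'version:' line from /proc/drbd of the running kernel and from the userspace tools you plan to install.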
In this example the initial cluster installation contained OpenVZ kernel 2.6.8-022stab078.10, which contains the DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which contains the DRBD module 0.7.20.&lt;br /&gt;
In the first step, build the DRBD userspace tools version 0.7.20 on your buildmachine. Then stop Heartbeat and DRBD on the passive node ('cat /proc/drbd' shows which node is active and which one is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as default kernel in /etc/grub.conf and then reboot this node.&lt;br /&gt;
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch over the services so that the currently active node becomes the passive node. Execute the following on the still active node (on 32-bit installations the hb_standby command may be located in /usr/lib/heartbeat instead):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now do the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as default kernel in /etc/grub.conf and reboot the node.&lt;br /&gt;
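For reference, the per-node sequence can be summarized in a small script. This is only a sketch: RUN=echo makes it a dry run that just prints the commands; clear RUN to execute them for real (as root, on the passive node only), and the package file names are the ones from this example.

```shell
#!/bin/sh
# Dry-run sketch of the per-node update sequence described above.
RUN=echo   # set RUN= (empty) to actually execute the commands

update_passive_node() {
  $RUN /etc/init.d/heartbeat stop
  $RUN /etc/init.d/drbd stop
  $RUN rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm
  $RUN rpm -Uhv drbd-0.7.20-1.i386.rpm
  echo "now set the new kernel as default in /etc/grub.conf and reboot"
}

update_passive_node
```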
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
After every update of the OpenVZ tools, ensure that OpenVZ is not started on system boot (starting and stopping is handled by Heartbeat). To disable starting of OpenVZ on system boot, execute on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2396</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2396"/>
		<updated>2006-10-11T15:31:15Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: added 2 chapters on testing&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example the two machines building the cluster run on CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps such as recompiling the DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
Further information can be found in the documentation of the Thomas-Krenn.AG cluster (the author of this howto works in the cluster development there, which is how he came to write it :-). The full documentation, with helpful illustrations, is currently only available in German:&lt;br /&gt;
http://my.thomas-krenn.com/service_support/index.php/page.242&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module. The DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
On a hardware node intended for production there should be no applications beyond what is needed to run OpenVZ (anything not needed by OpenVZ itself should run in a VE for security reasons). Therefore, compile DRBD and Heartbeat on another machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has been used successfully in many two-node clusters around the world. As its codebase has been in production use for many years, the code is very stable. At the time of writing, Heartbeat 1.2.4 is the current version of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the package:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you will find four rpm packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, and heartbeat-stonith-1.2.4-1.i386.rpm are needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, be sure to use the version that matches the DRBD version included in the OpenVZ kernel you want to use. If you are unsure about the version, do the following steps while running that OpenVZ kernel on a test machine (I used another virtual machine on a VMware Server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that this way the kernel-devel package from CentOS is used, but this does not matter, as the created drbd-km rpm will not be used (the DRBD kernel module is already included in the OpenVZ kernel). If the kernel-devel package does not match the version of the currently running kernel, you can execute 'make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/' to point directly to the kernel sources.&lt;br /&gt;
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines the same way you would for a normal OpenVZ installation, but do not create a filesystem for /vz. That filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update grub configuration to use the OpenVZ kernel by default. Disable starting of OpenVZ on system boot on both nodes (OpenVZ will be started and stopped by Heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes create a partition that acts as the underlying DRBD device. The partitions should have exactly the same size (for this example I created a 10 GB partition hda3 using fdisk on each node). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
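The "exactly the same size" requirement is easy to verify by comparing byte counts before configuring DRBD. Below is a minimal sketch of that comparison; the sparse files are a stand-in assumption, since the real /dev/hda3 partitions only exist on the nodes, where you would query the device directly (e.g. with blockdev --getsize64 /dev/hda3).

```shell
# Sketch: check that the two backing partitions are exactly the same size.
# Sparse files stand in for /dev/hda3 here (assumption for illustration);
# on the real nodes, query the device directly:
#   blockdev --getsize64 /dev/hda3
truncate -s 10G /tmp/hda3.node1   # stand-in for hda3 on ovz-node1
truncate -s 10G /tmp/hda3.node2   # stand-in for hda3 on ovz-node2

s1=$(stat -c '%s' /tmp/hda3.node1)
s2=$(stat -c '%s' /tmp/hda3.node2)
if [ "$s1" = "$s2" ]; then
    echo "sizes match: $s1 bytes"
else
    echo "size mismatch: $s1 vs $s2 bytes"
fi
```

Running the size query on both nodes and comparing the results takes only a minute and avoids surprises when DRBD connects the two devices.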
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in sync, and DRBD has no way of knowing whether you want the initial sync to go from ovz-node1 to ovz-node2 or from ovz-node2 to ovz-node1. As there is no data on the device yet, the direction does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable, although it is still syncing in the background.&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Move the original /vz directory to /vz.orig and recreate the /vz directory to have it as a mount point '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /vz /vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir /vz&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards move the necessary OpenVZ directories (/etc/vz, /etc/sysconfig/vz-scripts, /var/vzquota) and replace them with symbolic links '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/vz /etc/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/vz /etc/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it '''(only on ovz-node1!)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /vz&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz.orig/* /vz/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/vz /vz/cluster/etc/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /vz/cluster/etc/sysconfig/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /vz/cluster/var/&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat rpms on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to /etc/ha.d/ha.cf on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to /etc/ha.d/authkeys on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
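The permissions step matters, as Heartbeat checks the mode of authkeys and refuses to start when the file is readable by others. A minimal sketch of creating the file with the correct mode follows; the /tmp path is used here only for illustration (the real file belongs at /etc/ha.d/authkeys) and the key is the article's placeholder, not a real secret.

```shell
# Create an authkeys file and restrict it to mode 600, as Heartbeat requires.
# Written to /tmp for illustration; the real location is /etc/ha.d/authkeys,
# and the key below is only a placeholder.
cat > /tmp/authkeys <<'EOF'
auth 1
1 sha1 PutYourSuperSecretKeyHere
EOF
chmod 600 /tmp/authkeys
stat -c '%a' /tmp/authkeys
```

A quick `stat -c '%a'` after copying the file to both nodes confirms the mode survived the transfer.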
Create the Heartbeat configuration file haresources and copy it to /etc/ha.d/haresources on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 drbddisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Before going into production: testing, testing, testing, and ...hm... testing! ==&lt;br /&gt;
&lt;br /&gt;
The installation of the cluster is finished at this point. Before putting the cluster into production it is very important to test it. Depending on your particular hardware, you may encounter problems exactly when a failover becomes necessary. And as the cluster is all about high availability, such problems must be found before the cluster goes into production.&lt;br /&gt;
&lt;br /&gt;
Here is one example: the e1000 driver included in kernels &amp;lt; 2.6.12 has a problem when a cable gets unplugged while broadcast packets are still being sent out on that interface. When Heartbeat uses broadcast communication on a crossover link, this fills up the transmit ring buffer on the adapter (the buffer is full about 8 minutes after the cable got unplugged). Using unicast communication in Heartbeat, for example, avoids the problem. For details see: http://www.osdl.org/developer_bugzilla/show_bug.cgi?id=699#c22&lt;br /&gt;
&lt;br /&gt;
Without testing you may not become aware of such problems until the cluster is in production and a failover is necessary. So test your cluster carefully!&lt;br /&gt;
&lt;br /&gt;
Possible tests can include:&lt;br /&gt;
* power outage test of active node&lt;br /&gt;
* power outage test of passive node&lt;br /&gt;
* network connection outage test of eth0 of active node&lt;br /&gt;
* network connection outage test of eth0 of passive node&lt;br /&gt;
* network connection outage test of crossover network connection&lt;br /&gt;
* ...&lt;br /&gt;
&lt;br /&gt;
As mentioned above, some problems only arise after an outage has lasted several minutes. So also run the tests with a duration of &amp;gt;1h, for example.&lt;br /&gt;
&lt;br /&gt;
Before you start testing, build a test plan. Some valuable information on this can be found in chapter 3, &amp;quot;Testing a highly available Tivoli Storage Manager cluster environment&amp;quot;, of the Redbook ''IBM Tivoli Storage Manager in a Clustered Environment''&amp;lt;ref&amp;gt;http://www.redbooks.ibm.com/abstracts/sg246679.html&amp;lt;/ref&amp;gt;. That chapter notes the authoring team's experience that the testing phase takes at least twice the total implementation time of the cluster.&lt;br /&gt;
&lt;br /&gt;
== Before installing kernel updates: testing again ==&lt;br /&gt;
&lt;br /&gt;
New OpenVZ kernels often include driver updates. This kernel, for example, includes an update of the e1000 module: http://openvz.org/news/updates/kernel-022stab078.21&lt;br /&gt;
&lt;br /&gt;
To avoid overlooking problems with new components (such as a newer kernel), it is necessary to repeat the tests mentioned above. But as the cluster is already in production, a second cluster (a test cluster) with the same hardware as the main cluster is needed. Use this test cluster to test kernel updates or major OS updates for the hardware node before applying them to the production cluster.&lt;br /&gt;
&lt;br /&gt;
I know this is not an easy task, as it is time-consuming and requires additional hardware used only for testing. But when truly business-critical applications run on the cluster, it is very reassuring to know that the cluster also works fine with new updates installed on the hardware node. In many cases a dedicated test cluster and the time needed for testing updates may cost too much. If you cannot test updates this way, keep in mind that over time (when you must install security updates of the OS or the kernel) you end up with a cluster configuration you have never tested.&lt;br /&gt;
&lt;br /&gt;
If you need a tested cluster (including tested kernel updates), you may take a look at this Virtuozzo cluster: http://www.thomas-krenn.com/cluster&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when it contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools: their DRBD API version must match the API version of the DRBD module included in the OpenVZ kernel. The API versions can be found at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The simplest approach is to always use the userspace tools of exactly the same version as the DRBD module in the OpenVZ kernel.&lt;br /&gt;
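The API number the userspace tools must match can be read straight from the /proc/drbd header. The sketch below extracts it from a sample line; the sample string is copied from the output shown earlier in this article, and on a live node you would read /proc/drbd itself instead.

```shell
# Extract the DRBD module's API version from a /proc/drbd header line.
# The sample line mirrors this article's output; on a live node use:
#   line=$(head -n 1 /proc/drbd)
line='version: 0.7.17 (api:77/proto:74)'
api=$(echo "$line" | sed 's/.*api:\([0-9]*\).*/\1/')
echo "module API version: $api"
```

Comparing this number with the one printed by the freshly built userspace tools is a quick sanity check before rebooting into a new kernel.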
&lt;br /&gt;
In this example the initial cluster installation contained OpenVZ kernel 2.6.8-022stab078.10, which contains the DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which contains the DRBD module 0.7.20.&lt;br /&gt;
First, build version 0.7.20 of the DRBD userspace tools on your buildmachine. Then stop Heartbeat and DRBD on the passive node (hint: 'cat /proc/drbd' shows which node is active and which is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as default kernel in /etc/grub.conf and then reboot this node.&lt;br /&gt;
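Setting the default kernel means pointing the "default" line in /etc/grub.conf at the index of the new kernel's "title" entry. The sketch below illustrates the edit against a sample file; the file contents and the assumption that the new kernel was added as entry 0 are illustrative, so check the "title" lines in your own /etc/grub.conf before editing.

```shell
# Sketch: point grub's "default" at the new kernel's entry. The sample file
# is an assumption for illustration; on a real node edit /etc/grub.conf and
# count the "title" entries (a newly installed kernel is typically entry 0).
cat > /tmp/grub.conf <<'EOF'
default=1
timeout=5
title CentOS (2.6.8-022stab078.14)
	root (hd0,0)
	kernel /vmlinuz-2.6.8-022stab078.14 ro root=/dev/hda1
title CentOS (2.6.8-022stab078.10)
	root (hd0,0)
	kernel /vmlinuz-2.6.8-022stab078.10 ro root=/dev/hda1
EOF
sed -i 's/^default=.*/default=0/' /tmp/grub.conf
grep '^default=' /tmp/grub.conf
```

After the edit, a reboot should bring the node up on the 022stab078.14 kernel; `cat /proc/version` confirms it.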
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch the services over so that the currently active node becomes the passive node. Execute the following on the still-active node (note that the hb_standby command may be located in /usr/lib/heartbeat instead of /usr/lib64/heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now do the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as default kernel in /etc/grub.conf and reboot the node.&lt;br /&gt;
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
After every update of the OpenVZ tools, ensure that OpenVZ is not started on system boot (starting and stopping is handled by Heartbeat). To disable starting of OpenVZ on system boot, execute on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=User:Wfischer&amp;diff=2352</id>
		<title>User:Wfischer</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=User:Wfischer&amp;diff=2352"/>
		<updated>2006-09-27T14:09:00Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Hi, my name is Werner Fischer. I am working for Thomas-Krenn.AG in the development team for a bundled cluster solution based on Heartbeat, DRBD, and Virtuozzo.&lt;br /&gt;
&lt;br /&gt;
Private website (includes publications and talks): [http://www.wefi.net www.wefi.net]&lt;br /&gt;
&lt;br /&gt;
Company website: [http://www.thomas-krenn.com www.thomas-krenn.com]&lt;br /&gt;
&lt;br /&gt;
Company website (cluster information): [http://www.thomas-krenn.com/cluster www.thomas-krenn.com/cluster]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2351</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2351"/>
		<updated>2006-09-22T10:20:48Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: fixed two typos&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example the two machines building the cluster run on CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps such as recompiling the DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
Further information can be found in the documentation of the Thomas-Krenn.AG cluster (the author of this howto works in the cluster development there, which is how he came to write it :-). The full documentation, with helpful illustrations, is currently only available in German:&lt;br /&gt;
http://my.thomas-krenn.com/service_support/index.php/page.242&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module. The DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
On a hardware node intended for production there should be no applications beyond what is needed to run OpenVZ (anything not needed by OpenVZ itself should run in a VE for security reasons). Therefore, compile DRBD and Heartbeat on another machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has been used successfully in many two-node clusters around the world. As its codebase has been in production use for many years, the code is very stable. At the time of writing, Heartbeat 1.2.4 is the current version of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the package:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you will find four rpm packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, and heartbeat-stonith-1.2.4-1.i386.rpm are needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, be sure to use the version that matches the DRBD version included in the OpenVZ kernel you want to use. If you are unsure about the version, do the following steps while running that OpenVZ kernel on a test machine (I used another virtual machine on a VMware Server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that this way the kernel-devel package from CentOS is used, but this does not matter, as the generated drbd-km rpm will not be used (the DRBD kernel module is already included in the OpenVZ kernel). If the kernel-devel package does not match the currently running kernel, you can execute 'make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/' to point directly to the kernel sources.&lt;br /&gt;
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines in the same way as you would for a normal OpenVZ installation, but do not create a filesystem for /vz. This filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update the grub configuration to use the OpenVZ kernel by default. Disable starting of OpenVZ on system boot on both nodes (OpenVZ will be started and stopped by Heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes, create a partition that acts as the underlying DRBD device. The partitions must have exactly the same size (for this example, I created a 10 GB partition hda3 with fdisk on each node). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
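To double-check that the partitions really have the same size before setting up DRBD, you can print the partition size in blocks on both nodes and compare the numbers (sfdisk is part of util-linux; the device name here is the one from this example):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# sfdisk -s /dev/hda3&lt;br /&gt;
[root@ovz-node2 ~]# sfdisk -s /dev/hda3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;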
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in-sync, and DRBD has no way to know whether you want the initial sync from ovz-node1 to ovz-node2, or ovz-node2 to ovz-node1. As there is no data below it yet, it does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable (although it is syncing in the background).&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Move the original /vz directory to /vz.orig and recreate the /vz directory to have it as a mount point '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /vz /vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir /vz&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards move the necessary OpenVZ directories (/etc/vz, /etc/sysconfig/vz-scripts, /var/vzquota) and replace them with symbolic links '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/vz /etc/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/vz /etc/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it '''(only on ovz-node1!)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /vz&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz.orig/* /vz/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/vz /vz/cluster/etc/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /vz/cluster/etc/sysconfig/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /vz/cluster/var/&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat rpms on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to /etc/ha.d/ha.cf on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to /etc/ha.d/authkeys on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
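For example, to set the required permissions on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;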
Create the Heartbeat configuration file haresources and copy it to /etc/ha.d/haresources on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 datadisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when it contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools: their DRBD API version must match the API version of the DRBD module included in the OpenVZ kernel. The API versions can be found at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The safest approach is to always use the userspace tools version that exactly matches the version of the DRBD module included in the OpenVZ kernel.&lt;br /&gt;
&lt;br /&gt;
In this example the initial cluster installation contained OpenVZ kernel 2.6.8-022stab078.10, which contains the DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which contains the DRBD module 0.7.20.&lt;br /&gt;
First, build version 0.7.20 of the DRBD userspace tools on your buildmachine. Then stop Heartbeat and DRBD on the passive node (hint: 'cat /proc/drbd' shows which node is active and which is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as the default kernel in /etc/grub.conf and reboot this node.&lt;br /&gt;
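As a sketch of what this looks like (the title line and file paths below are only illustrative; they depend on your installation), the default line selects a boot entry by its zero-based position in /etc/grub.conf:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
default=0&lt;br /&gt;
timeout=5&lt;br /&gt;
title CentOS (2.6.8-022stab078.14)&lt;br /&gt;
        root (hd0,0)&lt;br /&gt;
        kernel /boot/vmlinuz-2.6.8-022stab078.14 ro root=/dev/hda1&lt;br /&gt;
        initrd /boot/initrd-2.6.8-022stab078.14.img&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;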
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch over the services so that the currently active node becomes the passive one. Execute the following on the still active node (note that the hb_standby command may be located in /usr/lib/heartbeat instead):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now do the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as default kernel in /etc/grub.conf and reboot the node.&lt;br /&gt;
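Condensed, the update of the now-passive node repeats the commands shown above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd stop&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
[root@ovz-node1 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
[root@ovz-node1 ~]# vi /etc/grub.conf    # set the new kernel as default&lt;br /&gt;
[root@ovz-node1 ~]# reboot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;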
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
After every update of the OpenVZ tools, ensure that OpenVZ is not started on system boot. To disable starting of OpenVZ on system boot, execute on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2350</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2350"/>
		<updated>2006-09-22T10:15:24Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: /* Copy necessary OpenVZ files to DRBD device */ fixed one link (/etc/vz) and reworded the steps more clearly&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example the two machines building the cluster run on CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps like recompiling the DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
Some additional information can be found in the documentation of the Thomas-Krenn.AG cluster (the author of this howto works in cluster development there, which is why he was able to write this howto :-). The full documentation, with interesting illustrations, is currently available only in German:&lt;br /&gt;
http://my.thomas-krenn.com/service_support/index.php/page.242&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module. The DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository contained only an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
On a hardware node for production use, there should not be any applications that are not really needed for running OpenVZ (anything not needed by OpenVZ should run in a VE for security reasons). Therefore, compile DRBD and Heartbeat on another machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has been used successfully in many two-node clusters around the world. As the 1.2.* codebase has been in production use for many years, the code is very stable. At the time of writing, Heartbeat 1.2.4 is the current version of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the packages:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you find four rpm packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, and heartbeat-stonith-1.2.4-1.i386.rpm are needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, make sure to use the version that matches the DRBD module version included in the OpenVZ kernel you want to use. If you are unsure about the version, do the following steps while running that OpenVZ kernel on a test machine (I used another virtual machine on a VMware server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that this way the kernel-devel package from CentOS is used, but this does not matter, as the generated drbd-km rpm will not be used (the DRBD kernel module is already included in the OpenVZ kernel). If the kernel-devel package does not match the currently running kernel, you can execute 'make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/' to point directly to the kernel sources.&lt;br /&gt;
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines in the same way as you would for a normal OpenVZ installation, but do not create a filesystem for /vz. This filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update the grub configuration to use the OpenVZ kernel by default. Disable starting of OpenVZ on system boot on both nodes (OpenVZ will be started and stopped by Heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes, create a partition that acts as the underlying DRBD device. The partitions must have exactly the same size (for this example, I created a 10 GB partition hda3 with fdisk on each node). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
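To double-check that the partitions really have the same size before setting up DRBD, you can print the partition size in blocks on both nodes and compare the numbers (sfdisk is part of util-linux; the device name here is the one from this example):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# sfdisk -s /dev/hda3&lt;br /&gt;
[root@ovz-node2 ~]# sfdisk -s /dev/hda3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;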
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in-sync, and DRBD has no way to know whether you want the initial sync from ovz-node1 to ovz-node2, or ovz-node2 to ovz-node1. As there is no data below it yet, it does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable (although it is syncing in the background).&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Move the original /vz directory to /vz.orig and recreate the /vz directory to have it as a mount point '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /vz /vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir /vz&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards move the necessary OpenVZ directories (/etc/vz, /etc/sysconfig/vz-scripts, /var/vzquota) and replace them with symbolic links '''(do this on both nodes)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/vz /etc/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/vz /etc/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it '''(only on ovz-node1!)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /vz&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz.orig/* /vz/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /vz/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/vz /vz/cluster/etc/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /vz/cluster/etc/sysconfig/&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /vz/cluster/var/&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat rpms on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to /etc/ha.d/ha.cf on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to /etc/ha.d/authkeys on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file haresources and copy it to /etc/ha.d/haresources on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 datadisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
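Read from left to right, the resource line above can be understood as follows (annotated copy for illustration only; the e-mail address is a placeholder):

```
ovz-node1                          preferred node for this resource group
datadisk::r0                       make DRBD resource r0 Primary (DRBD-supplied Heartbeat script)
Filesystem::/dev/drbd0::/vz::ext3  mount /dev/drbd0 on /vz as ext3
vz                                 run the OpenVZ init script (starts the containers)
MailTo::youremail@yourdomain.tld   send a notification e-mail on takeover
```

On takeover Heartbeat starts these resources left to right, and on release it stops them right to left.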
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when the new kernel contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools: their DRBD API version must match the API version of the DRBD module that is included in the OpenVZ kernel. The API versions can be found at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The safest approach is to always use the userspace tools of exactly the same DRBD version as the module included in the OpenVZ kernel.&lt;br /&gt;
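As a quick sanity check (a minimal sketch; the sample version line is copied from the /proc/drbd output shown in this article), the API number can be extracted from the version line and compared with the one reported by the userspace tools:

```shell
# Extract the "api:NN" field from a /proc/drbd version line.
# The sample input below is taken from the output shown in this howto;
# on a live node you would read the line from /proc/drbd instead.
line="version: 0.7.17 (api:77/proto:74)"
api=$(printf '%s\n' "$line" | sed -n 's/.*api:\([0-9]*\).*/\1/p')
echo "$api"   # prints 77
```

If this number differs from the API version of the installed drbd userspace package, do not proceed with the update until they match.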
&lt;br /&gt;
In this example the initial cluster installation contained OpenVZ kernel 2.6.8-022stab078.10, which contains the DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which contains the DRBD module 0.7.20.&lt;br /&gt;
In the first step, build the DRBD userspace tools version 0.7.20 on your buildmachine. Then stop Heartbeat and DRBD on the passive node (you can use 'cat /proc/drbd' to see which node is active and which one is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as the default kernel in /etc/grub.conf and then reboot this node.&lt;br /&gt;
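Switching the default entry can also be scripted. A minimal sketch, assuming a classic GRUB 0.9x-style grub.conf where the freshly installed kernel is the first title entry (index 0); a temporary file stands in for the real /etc/grub.conf:

```shell
# Point the "default" line of a grub.conf-style file at entry 0
# (newly installed kernels are usually inserted at the top of the list).
conf=$(mktemp)
printf 'default=1\ntimeout=5\n' > "$conf"   # stand-in for /etc/grub.conf
sed -i 's/^default=.*/default=0/' "$conf"
grep '^default=' "$conf"    # prints default=0
rm -f "$conf"
```

Double-check the entry index against the actual title entries before rebooting, since a wrong default boots the old kernel with mismatched DRBD userspace tools.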
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch over the services so that the currently active node becomes the passive node. Execute the following on the still active node (on some systems the hb_standby command is located in /usr/lib/heartbeat instead):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now do the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as default kernel in /etc/grub.conf and reboot the node.&lt;br /&gt;
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
After every update of the OpenVZ tools, ensure that OpenVZ is not started on system boot. To disable starting of OpenVZ on system boot, execute on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2343</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2343"/>
		<updated>2006-09-19T12:50:14Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: added a hint about additional infos on virtuozzo clustering&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example the two machines building the cluster run on CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps like recompiling the new DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
Some other additional information can be found in the documentation of the Thomas-Krenn.AG cluster (the author of this howto works in cluster development there, which is how he was able to write this howto :-). The full documentation, with interesting illustrations, is currently only available in German:&lt;br /&gt;
http://my.thomas-krenn.com/service_support/index.php/page.242&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module. The DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
On a hardware node for production use there should not be any applications that are not really needed for running OpenVZ (anything not needed by OpenVZ should run in a VE for security reasons). Therefore, compile DRBD and Heartbeat on another machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has been used successfully in many two-node clusters around the world. As its codebase has been in production use for many years, the code is very stable. At the time of writing, Heartbeat version 1.2.4 is the current version of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the package:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you find four RPM packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, and heartbeat-stonith-1.2.4-1.i386.rpm are needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, take care to use the version that matches the DRBD version included in the OpenVZ kernel you want to use. If you are unsure about the version, perform the following steps while running that OpenVZ kernel on a test machine (I used another virtual machine on a VMware Server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that in this way the kernel-devel package from CentOS is used, but this does not matter, as the created drbd-km rpm will not be used (the DRBD kernel module is already included in the OpenVZ kernel). If the kernel-devel package is not the same version as the currently running kernel package, you can execute 'make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/' to point directly to the kernel sources.&lt;br /&gt;
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines in the same way as you would for a normal OpenVZ installation, but do not create a filesystem for /vz. This filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update grub configuration to use the OpenVZ kernel by default. Disable starting of OpenVZ on system boot on both nodes (OpenVZ will be started and stopped by Heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes create a partition that acts as the underlying DRBD device. The partitions should have exactly the same size (I created a 10 GB partition hda3 using fdisk on each node for this example). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in sync, and DRBD has no way of knowing whether you want the initial sync to run from ovz-node1 to ovz-node2 or from ovz-node2 to ovz-node1. As there is no data on the device yet, the direction does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable (although it is still syncing in the background).&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it (only on ovz-node1!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /mnt&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz/* /mnt/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards move the original files and replace them with symbolic links (do this on both nodes):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz /etc/sysconfig/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz /etc/sysconfig/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat rpms on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to /etc/ha.d/ha.cf on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to /etc/ha.d/authkeys on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file haresources and copy it to /etc/ha.d/haresources on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 datadisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when the new kernel contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools: their DRBD API version must match the API version of the DRBD module that is included in the OpenVZ kernel. The API versions can be found at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The safest approach is to always use the userspace tools of exactly the same DRBD version as the module included in the OpenVZ kernel.&lt;br /&gt;
&lt;br /&gt;
In this example the initial cluster installation contained OpenVZ kernel 2.6.8-022stab078.10, which contains the DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which contains the DRBD module 0.7.20.&lt;br /&gt;
In the first step, build the DRBD userspace tools version 0.7.20 on your buildmachine. Then stop Heartbeat and DRBD on the passive node (you can use 'cat /proc/drbd' to see which node is active and which one is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as the default kernel in /etc/grub.conf and then reboot this node.&lt;br /&gt;
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch over the services so that the currently active node becomes the passive node. Execute the following on the still active node (on some systems the hb_standby command is located in /usr/lib/heartbeat instead):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now do the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as default kernel in /etc/grub.conf and reboot the node.&lt;br /&gt;
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
After every update of the OpenVZ tools, ensure that OpenVZ is not started on system boot. To disable starting of OpenVZ on system boot, execute on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2335</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=2335"/>
		<updated>2006-09-18T07:03:46Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: corrected minor typos and hints&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example the two machines building the cluster run on CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps like recompiling the new DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module. The DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
On a hardware node for production use there should not be any applications that are not really needed for running OpenVZ (anything not needed by OpenVZ should run in a VE for security reasons). Therefore, compile DRBD and Heartbeat on another machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has been used successfully in many two-node clusters around the world. As its codebase has been in production use for many years, the code is very stable. At the time of writing, Heartbeat version 1.2.4 is the current version of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the package:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you find four RPM packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, and heartbeat-stonith-1.2.4-1.i386.rpm are needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, make sure to use the version that matches the DRBD module included in the OpenVZ kernel you want to use. If you are unsure about the version, do the following steps while running the OpenVZ kernel in question on a test machine (I used another virtual machine on a VMware Server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that in this way the kernel-devel package from CentOS is used, but this does not matter, as the created drbd-km rpm will not be used (the DRBD kernel module is already included in the OpenVZ kernel). If the kernel-devel package does not match the version of the currently running kernel, you can execute 'make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/' to point directly to the kernel sources.&lt;br /&gt;
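If you script the build, the module version can be read straight from /proc/drbd and used to pick the matching tarball. A minimal sketch (only the 'version:' line format shown above is assumed; on a live system you would read the line from /proc/drbd instead of the literal used here):&lt;br /&gt;

```shell
# Parse the DRBD module version from a /proc/drbd "version:" line.
# On a live system: line=$(head -1 /proc/drbd)
line='version: 0.7.17 (api:77/proto:74)'
ver=$(echo "$line" | awk '{print $2}')
echo "drbd-$ver.tar.gz"   # the userspace tarball to build, e.g. drbd-0.7.17.tar.gz
```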
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines in the same way as you would for a normal OpenVZ installation, but do not create a filesystem for /vz. This filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update the grub configuration to use the OpenVZ kernel by default. Disable starting of OpenVZ on system boot on both nodes (OpenVZ will be started and stopped by Heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
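Updating the grub configuration means pointing the default entry at the OpenVZ kernel. A hedged sketch of the relevant part of /etc/grub.conf (the entry title, paths, and numbering are assumptions based on this article's example setup; adjust them to your own file):&lt;br /&gt;

```
default=0        # 0-based index of the OpenVZ kernel entry below
timeout=5

title CentOS (2.6.8-022stab078.10)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.8-022stab078.10 ro root=/dev/hda1
        initrd /boot/initrd-2.6.8-022stab078.10.img
```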
Then reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes, create a partition that acts as the underlying DRBD device. The partitions should have exactly the same size (for this example I created a 10 GB partition hda3 with fdisk on each node). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in sync, and DRBD has no way of knowing whether you want the initial sync to run from ovz-node1 to ovz-node2 or the other way around. As the device does not hold any data yet, the direction does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable, even while the sync continues in the background.&lt;br /&gt;
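The progress line also lets you sanity-check the remaining time: 9418 MB at the configured 30 MB/s rate works out to roughly five minutes, which agrees with the 'finish: 0:04:51' estimate shown above (the actual rate was slightly higher). A quick check:&lt;br /&gt;

```shell
# Estimate total sync time from the figures shown in /proc/drbd above:
# 9418 MB device size at the 30 MB/s syncer rate from /etc/drbd.conf.
awk 'BEGIN { total_mb = 9418; rate = 30; printf "%.1f minutes\n", total_mb / rate / 60 }'
```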
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it (only on ovz-node1!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /mnt&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz/* /mnt/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards move the original files and replace them with symbolic links (do this on both nodes):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz /etc/sysconfig/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz /etc/sysconfig/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
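If you want to double-check the symlink layout before touching the real /etc, the same pattern can be rehearsed in a scratch directory first. A self-contained sketch (temporary paths only; nothing here modifies the real configuration):&lt;br /&gt;

```shell
# Rehearse the /etc -> /vz/cluster symlink pattern in a temp dir.
tmp=$(mktemp -d)
mkdir -p "$tmp/vz/cluster/etc/sysconfig"
touch "$tmp/vz/cluster/etc/sysconfig/vz"
ln -s "$tmp/vz/cluster/etc/sysconfig/vz" "$tmp/etc-vz"
readlink "$tmp/etc-vz"   # shows the target inside the (mock) /vz tree
rm -rf "$tmp"
```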
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat rpms on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to /etc/ha.d/ha.cf on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to /etc/ha.d/authkeys on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
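Instead of inventing a shared key by hand, you can generate a random one; a sketch (any method that produces a high-entropy string works equally well):&lt;br /&gt;

```shell
# Generate a random 40-character hex key for /etc/ha.d/authkeys.
key=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
printf 'auth 1\n1 sha1 %s\n' "$key"
```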
Create the Heartbeat configuration file haresources and copy it to /etc/ha.d/haresources on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 datadisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
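Read left to right, the haresources line names the preferred node and then the resources in start order (on failover they are stopped in reverse order). An annotated copy of the example line:&lt;br /&gt;

```
ovz-node1                             # preferred node for these resources
  datadisk::r0                        # make DRBD resource r0 Primary
  Filesystem::/dev/drbd0::/vz::ext3   # then mount the DRBD device on /vz
  vz                                  # then start OpenVZ (the vz init script)
  MailTo::youremail@yourdomain.tld    # notify this address on takeover
```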
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when the new kernel contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools: when an OpenVZ kernel contains a new DRBD version, the DRBD API version of the userspace tools must match the API version of the DRBD module included in that kernel. The API versions can be found at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The safest approach is to always use the userspace tools release that exactly matches the module version included in the OpenVZ kernel.&lt;br /&gt;
&lt;br /&gt;
In this example the initial cluster installation contained OpenVZ kernel 2.6.8-022stab078.10, which contains the DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which contains the DRBD module 0.7.20.&lt;br /&gt;
As a first step, build the DRBD userspace tools version 0.7.20 on your build machine. Then stop Heartbeat and DRBD on the passive node ('cat /proc/drbd' shows which node is active and which is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as default kernel in /etc/grub.conf and then reboot this node.&lt;br /&gt;
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch over the services so that the currently active node becomes the passive one. Execute the following on the still-active node (note that the hb_standby command may be located in /usr/lib/heartbeat instead of /usr/lib64/heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now do the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as default kernel in /etc/grub.conf and reboot the node.&lt;br /&gt;
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
After every update of the OpenVZ tools, ensure that OpenVZ is not started on system boot. To disable starting of OpenVZ on system boot, execute on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=1954</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=1954"/>
		<updated>2006-08-04T09:29:04Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: /* How to do updates of vzctl, vzctl-lib, and vzquota */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example the two machines building the cluster run on CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps like recompiling the DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module. The DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
A hardware node for production use should not run any applications that are not strictly needed for running OpenVZ (for security reasons, anything not needed by OpenVZ should run in a VE). Therefore, compile DRBD and Heartbeat on another machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has been used successfully in many two-node clusters around the world. As the 1.2.* codebase has been in production use for many years, it is very stable. At the time of writing, Heartbeat version 1.2.4 is the current version of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the package:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you will find four rpm packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm is needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, make sure to use the version that matches the DRBD module included in the OpenVZ kernel you want to use. If you are unsure about the version, do the following steps while running the OpenVZ kernel in question on a test machine (I used another virtual machine on a VMware Server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that in this way the kernel-devel package from CentOS is used, but this does not matter, as the created drbd-km rpm will not be used (the DRBD kernel module is already included in the OpenVZ kernel). If the kernel-devel package does not match the version of the currently running kernel, you can execute 'make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/' to point directly to the kernel sources.&lt;br /&gt;
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines in the same way as you would for a normal OpenVZ installation, but do not create a filesystem for /vz. This filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update grub configuration to use the OpenVZ kernel by default. Disable starting of OpenVZ on system boot on both nodes (OpenVZ will be started and stopped by Heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes, create a partition that acts as the underlying DRBD device. The partitions should have exactly the same size (for this example I created a 10 GB partition hda3 with fdisk on each node). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in sync, and DRBD has no way of knowing whether you want the initial sync to run from ovz-node1 to ovz-node2 or the other way around. As the device does not hold any data yet, the direction does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable, even while the sync continues in the background.&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it (only on ovz-node1!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /mnt&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz/* /mnt/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards move the original files and replace them with symbolic links (do this on both nodes):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz /etc/sysconfig/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz /etc/sysconfig/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat rpms on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to /etc/ha.d/ha.cf on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to /etc/ha.d/authkeys on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
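The permission requirement above can be illustrated with a short shell sketch. It operates on a temporary file rather than the real /etc/ha.d/authkeys, so the paths here are for demonstration only; on the nodes you would run the same chmod against /etc/ha.d/authkeys.

```shell
# Create a scratch copy of an authkeys file and restrict it to
# owner-only permissions (apply the same chmod to /etc/ha.d/authkeys).
f=$(mktemp)
printf 'auth 1\n1 sha1 PutYourSuperSecretKeyHere\n' > "$f"
chmod 600 "$f"
stat -c '%a' "$f"   # prints 600
```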
Create the Heartbeat configuration file haresources and copy it to /etc/ha.d/haresources on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 datadisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when the kernel contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools: their DRBD API version must match the API version of the DRBD module that is included in the OpenVZ kernel. The API versions can be found at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The safest approach is to always use the release of the DRBD userspace tools that matches the version of the DRBD module included in the OpenVZ kernel.&lt;br /&gt;
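As a quick check, the module's API number can be parsed out of the first line of /proc/drbd before picking a userspace release. The sed expression below is only an illustration (applied here to the sample line shown elsewhere in this article), not part of the DRBD tools:

```shell
# Extract the "api:NN" field from a /proc/drbd version line.
# On a real node you would read the line with: head -1 /proc/drbd
line="version: 0.7.17 (api:77/proto:74)"
api=$(echo "$line" | sed 's/.*api:\([0-9][0-9]*\).*/\1/')
echo "module API version: $api"   # prints: module API version: 77
```

The same extraction applied to the 0.7.20 module's version line yields API 79, which is why the userspace tools must be rebuilt when updating to a kernel that ships the newer module.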
&lt;br /&gt;
In this example the initial cluster installation contained OpenVZ kernel 2.6.8-022stab078.10, which includes DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which includes DRBD module 0.7.20.&lt;br /&gt;
First, build the DRBD userspace tools version 0.7.20 on your build machine. Then stop Heartbeat and DRBD on the passive node (you can use 'cat /proc/drbd' to see which node is active and which one is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as default kernel in /etc/grub.conf and then reboot this node.&lt;br /&gt;
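Making the new kernel the default usually means pointing the default= line in /etc/grub.conf at the stanza of the newly installed kernel (rpm -ihv normally adds it as the first stanza, index 0). The sketch below works on a temporary sample file; the sample contents and the assumption that the new kernel is stanza 0 should be verified against your real /etc/grub.conf before rebooting.

```shell
# Demonstration on a sample grub.conf; apply the same sed to /etc/grub.conf
# after backing it up, and only after confirming the new kernel's stanza index.
cfg=$(mktemp)
printf 'default=1\ntimeout=5\ntitle CentOS (2.6.8-022stab078.14)\ntitle CentOS (2.6.8-022stab078.10)\n' > "$cfg"
sed -i 's/^default=.*/default=0/' "$cfg"   # select the first (new) kernel stanza
grep '^default=' "$cfg"   # prints: default=0
```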
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch over the services to make the currently active node the passive node. Execute the following on the still active node (note that the hb_standby command may be located in /usr/lib/heartbeat instead of /usr/lib64/heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now do the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as default kernel in /etc/grub.conf and reboot the node.&lt;br /&gt;
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
After every update of the OpenVZ tools, ensure that OpenVZ is not started on system boot. To disable starting of OpenVZ on system boot, execute on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=1953</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=1953"/>
		<updated>2006-08-04T09:24:39Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: Added info to disable starting of OpenVZ on system boot /* Installing OpenVZ */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example the two machines building the cluster run on CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps like recompiling the DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module. The DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
On a hardware node intended for production use, there should not be any applications that are not really needed for running OpenVZ (anything not needed by OpenVZ should run in a VE for security reasons). As a result, compile DRBD and Heartbeat on another machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has successfully been used in many two-node clusters around the world. As the codebase used in version 1.2.* has been in production use for many years, the code is very stable. At the time of writing, Heartbeat version 1.2.4 is the current version of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the package:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you will find four RPM packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm is needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, make sure to use the version that matches the DRBD version included in the OpenVZ kernel you want to use. If you are unsure about the version, do the following steps while running the OpenVZ kernel that you want to use on a test machine (I used another virtual machine on a VMware Server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the build machine, do the following to create the RPM:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that in this way the kernel-devel package from CentOS is used, but this does not matter, as the created drbd-km RPM will not be used (the DRBD kernel module is already included in the OpenVZ kernel). If the kernel-devel package is not the same version as the currently running kernel package, you can execute 'make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/' to point directly to the kernel sources.&lt;br /&gt;
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines in the same way as you would for a normal OpenVZ installation, but do not create a filesystem for /vz. This filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update grub configuration to use the OpenVZ kernel by default. Disable starting of OpenVZ on system boot on both nodes (OpenVZ will be started and stopped by Heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# chkconfig --del vz&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes create a partition that acts as the underlying DRBD device. The partitions should have exactly the same size (for this example I created a 10 GB partition hda3 using fdisk on each node). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in-sync, and DRBD has no way to know whether you want the initial sync from ovz-node1 to ovz-node2, or ovz-node2 to ovz-node1. As there is no data below it yet, it does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable (although it is syncing in the background).&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it (only on ovz-node1!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /mnt&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz/* /mnt/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards move the original files and replace them with symbolic links (do this on both nodes):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz /etc/sysconfig/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz /etc/sysconfig/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat RPMs on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to /etc/ha.d/ha.cf on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to /etc/ha.d/authkeys on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file haresources and copy it to /etc/ha.d/haresources on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 datadisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when the kernel contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools: their DRBD API version must match the API version of the DRBD module that is included in the OpenVZ kernel. The API versions can be found at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The safest approach is to always use the release of the DRBD userspace tools that matches the version of the DRBD module included in the OpenVZ kernel.&lt;br /&gt;
&lt;br /&gt;
In this example the initial cluster installation contained OpenVZ kernel 2.6.8-022stab078.10, which includes DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which includes DRBD module 0.7.20.&lt;br /&gt;
First, build the DRBD userspace tools version 0.7.20 on your build machine. Then stop Heartbeat and DRBD on the passive node (you can use 'cat /proc/drbd' to see which node is active and which one is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as default kernel in /etc/grub.conf and then reboot this node.&lt;br /&gt;
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch over the services to make the currently active node the passive node. Execute the following on the still active node (note that the hb_standby command may be located in /usr/lib/heartbeat instead of /usr/lib64/heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now do the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as default kernel in /etc/grub.conf and reboot the node.&lt;br /&gt;
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
(I'll update this part soon)&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=1952</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=1952"/>
		<updated>2006-08-04T09:20:45Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: Added an info about kernel-devel version, corrected make rpm command /* Compiling DRBD userspace tools */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example the two machines building the cluster run on CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps like recompiling the DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module. The DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
On a hardware node intended for production use, there should not be any applications that are not really needed for running OpenVZ (anything not needed by OpenVZ should run in a VE for security reasons). As a result, compile DRBD and Heartbeat on another machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has successfully been used in many two-node clusters around the world. As the codebase used in version 1.2.* has been in production use for many years, the code is very stable. At the time of writing, Heartbeat version 1.2.4 is the current version of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the package:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you will find four RPM packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm is needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, make sure to use the version that matches the DRBD version included in the OpenVZ kernel you want to use. If you are unsure about the version, do the following steps while running the OpenVZ kernel that you want to use on a test machine (I used another virtual machine on a VMware Server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that in this way the kernel-devel package from CentOS is used, but this does not matter, as the created drbd-km rpm will not be used (the DRBD kernel module is already included in the OpenVZ kernel). If the kernel-devel package does not match the version of the currently running kernel, you can execute 'make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/' to point directly to the kernel sources.&lt;br /&gt;
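If you need the KDIR variant, the installed kernel-devel trees live under /usr/src/kernels. The sketch below (an illustration only; a scratch directory stands in for the real /usr/src so the snippet runs anywhere) shows how such a path can be picked up:

```shell
# Scratch directory standing in for the real /usr/src tree.
base=$(mktemp -d)
mkdir -p "$base/usr/src/kernels/2.6.9-34.0.2.EL-i686"

# Pick the first installed kernel-devel tree; on a real build machine
# the glob would simply be /usr/src/kernels/*/.
KDIR=$(ls -d "$base"/usr/src/kernels/*/ | head -n 1)
echo "make rpm KDIR=$KDIR"

rm -rf "$base"
```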
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines in the same way as you would for a normal OpenVZ installation, but do not create a filesystem for /vz. That filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update grub configuration to use the OpenVZ kernel by default and reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes, create a partition that acts as the underlying DRBD device. The partitions should have exactly the same size (for this example, I created a 10 GB partition hda3 with fdisk on each node). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in sync, and DRBD has no way of knowing whether you want the initial sync to go from ovz-node1 to ovz-node2 or from ovz-node2 to ovz-node1. As there is no data on the device yet, the direction does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case), the DRBD device is already usable, although it is still syncing in the background.&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it (only on ovz-node1!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /mnt&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz/* /mnt/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards move the original files and replace them with symbolic links (do this on both nodes):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz /etc/sysconfig/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz /etc/sysconfig/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
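A quick way to verify the links is to check that each one resolves into /vz/cluster. The sketch below is not from the original article and demonstrates the check against a scratch directory so it is self-contained; on a node you would run readlink on the three real paths instead.

```shell
# Scratch setup standing in for the real /etc/sysconfig/vz symlink.
root=$(mktemp -d)
mkdir -p "$root/vz/cluster/etc/sysconfig/vz"
ln -s "$root/vz/cluster/etc/sysconfig/vz" "$root/vz-link"

# The link is considered good if its target lives under /vz/cluster.
target=$(readlink "$root/vz-link")
case "$target" in
  */vz/cluster/*) link_ok=yes ;;
  *)              link_ok=no  ;;
esac
echo "symlink check: $link_ok"

rm -rf "$root"
```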
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat rpms on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to /etc/ha.d/ha.cf on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to /etc/ha.d/authkeys on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file haresources and copy it to /etc/ha.d/haresources on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 datadisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when it contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools: their DRBD API version must match the API version of the DRBD module included in the OpenVZ kernel. The API versions can be found at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The safest approach is to always use the userspace tools release that matches the DRBD module version included in the OpenVZ kernel.&lt;br /&gt;
&lt;br /&gt;
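The api number itself is printed in the /proc/drbd version line, so the comparison can be automated. This is a hedged sketch (not part of the article); the expected value of 79 for drbd 0.7.20 is taken from the outputs shown further below.

```shell
# Version line as printed after the update in this article; on a real
# node you would use: proc_line=$(head -n 1 /proc/drbd)
proc_line='version: 0.7.20 (api:79/proto:74)'

# Extract the number after "api:".
module_api=$(printf '%s\n' "$proc_line" | sed 's/.*api:\([0-9]*\).*/\1/')

# api expected by the userspace tools you built (79 for drbd 0.7.20).
tools_api=79

if [ "$module_api" = "$tools_api" ]; then
  echo "API versions match: $module_api"
else
  echo "API mismatch: module=$module_api tools=$tools_api" >&2
fi
```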
In this example the initial cluster installation used OpenVZ kernel 2.6.8-022stab078.10, which contains DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which contains DRBD module 0.7.20.&lt;br /&gt;
First, build the DRBD userspace tools version 0.7.20 on your buildmachine. Then stop Heartbeat and DRBD on the passive node ('cat /proc/drbd' shows which node is active and which is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as default kernel in /etc/grub.conf and then reboot this node.&lt;br /&gt;
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch over the services so that the currently active node becomes the passive one. Execute the following on the still-active node (note that the hb_standby command may also be located in /usr/lib/heartbeat):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now do the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as default kernel in /etc/grub.conf and reboot the node.&lt;br /&gt;
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
(I'll update this part soon)&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=1951</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=1951"/>
		<updated>2006-08-04T08:56:36Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: /* How to do OpenVZ kernel updates when it contains a new DRBD version */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example the two machines building the cluster run on CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps like recompiling the new DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module. The DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
A hardware node for production use should not run any applications that are not strictly needed for running OpenVZ (for security reasons, anything not needed by OpenVZ itself should run in a VE). Therefore, compile DRBD and Heartbeat on another machine running CentOS 4.3 (in this example I used a virtual machine on a VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has been used successfully in many two-node clusters around the world. As the 1.2.* codebase has been in production use for many years, it is very stable. At the time of writing, Heartbeat version 1.2.4 is the current release of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Download the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing, this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the packages:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you will find four rpm packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example the heartbeat, heartbeat-pils, and heartbeat-stonith packages are needed; heartbeat-ldirectord is not used.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, make sure to use the version that matches the DRBD module included in the OpenVZ kernel you want to use. If you are unsure about the version, perform the following steps while running that OpenVZ kernel on a test machine (I used another virtual machine on a VMware Server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that in this way the kernel-devel package from CentOS is used, but this does not matter, as the created drbd-km rpm will not be used (the DRBD kernel module is already included in the OpenVZ kernel).&lt;br /&gt;
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines in the same way as you would for a normal OpenVZ installation, but do not create a filesystem for /vz. That filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update grub configuration to use the OpenVZ kernel by default and reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes, create a partition that acts as the underlying DRBD device. The partitions should have exactly the same size (for this example, I created a 10 GB partition hda3 with fdisk on each node). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in sync, and DRBD has no way of knowing whether you want the initial sync to go from ovz-node1 to ovz-node2 or from ovz-node2 to ovz-node1. As there is no data on the device yet, the direction does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case), the DRBD device is already usable, although it is still syncing in the background.&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it (only on ovz-node1!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /mnt&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz/* /mnt/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards move the original files and replace them with symbolic links (do this on both nodes):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz /etc/sysconfig/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz /etc/sysconfig/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat rpms on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to /etc/ha.d/ha.cf on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to /etc/ha.d/authkeys on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file haresources and copy it to /etc/ha.d/haresources on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 datadisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when the kernel contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, it is important to use the correct version of the DRBD userspace tools: their DRBD API version must match the API version of the DRBD module included in the OpenVZ kernel. The API versions are listed at http://svn.drbd.org/drbd/branches/drbd-0.7/ChangeLog. The safest approach is to always use the userspace tools release whose version matches that of the DRBD module included in the OpenVZ kernel.&lt;br /&gt;
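A quick way to read the module's API version on a running node is to parse the header line of /proc/drbd. The sketch below applies the parsing to a sample line copied from the outputs in this article; on a node you would run the same sed expression against /proc/drbd directly.&lt;br /&gt;

```shell
# Extract the DRBD API version from a /proc/drbd header line.
# Sample header line copied from the outputs shown in this article:
line='version: 0.7.20 (api:79/proto:74)'
api=$(printf '%s\n' "$line" | sed -n 's/.*api:\([0-9]*\).*/\1/p')
echo "$api"   # prints 79; on a node: sed -n 's/.*api:\([0-9]*\).*/\1/p' /proc/drbd
```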
&lt;br /&gt;
In this example the initially installed cluster used OpenVZ kernel 2.6.8-022stab078.10, which contains DRBD module 0.7.17. The steps below show the update procedure to OpenVZ kernel 2.6.8-022stab078.14, which contains DRBD module 0.7.20.&lt;br /&gt;
In the first step, build the DRBD userspace tools version 0.7.20 on your build machine. Then stop Heartbeat and DRBD on the passive node (hint: 'cat /proc/drbd' shows which node is active and which one is passive):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/heartbeat stop&lt;br /&gt;
Stopping High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:60 nr:136 dw:196 dr:97 al:3 bm:3 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]# /etc/init.d/drbd stop&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
cat: /proc/drbd: No such file or directory&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then install the new kernel and the DRBD userspace tools on this node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# rpm -ihv ovzkernel-2.6.8-022stab078.14.i686.rpm&lt;br /&gt;
warning: ovzkernel-2.6.8-022stab078.14.i686.rpm: V3 DSA signature: NOKEY, key ID a7a1d4b6&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:ovzkernel              ########################################### [100%]&lt;br /&gt;
[root@ovz-node2 ~]# rpm -Uhv drbd-0.7.20-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
/sbin/service&lt;br /&gt;
Stopping all DRBD resources.&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now set the new kernel as default kernel in /etc/grub.conf and then reboot this node.&lt;br /&gt;
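Setting the default kernel means pointing the default= line in /etc/grub.conf at the menu entry of the new kernel. The fragment below is purely illustrative; the exact title, kernel, and initrd lines depend on how the installer wrote your grub.conf.&lt;br /&gt;

```
# /etc/grub.conf (illustrative fragment)
default=0        # boot the first "title" entry below
timeout=5
title CentOS (2.6.8-022stab078.14)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.8-022stab078.14 ro root=/dev/hda1
        initrd /boot/initrd-2.6.8-022stab078.14.img
```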
&lt;br /&gt;
After the reboot, the new DRBD version is visible:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node2 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.20 (api:79/proto:74)&lt;br /&gt;
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57&lt;br /&gt;
 0: cs:Connected st:Secondary/Primary ld:Consistent&lt;br /&gt;
    ns:0 nr:28 dw:28 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node2 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update the other node, switch over the services so that the currently active node becomes the passive one. Execute the following on the still active node (note that the hb_standby command may be located in /usr/lib/heartbeat instead):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /usr/lib64/heartbeat/hb_standby&lt;br /&gt;
2006/08/03_21:09:41 Going standby [all].&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now do the same steps on the new passive node to update it: stop Heartbeat and DRBD, install the new kernel and the new DRBD userspace tools, set the new kernel as default kernel in /etc/grub.conf and reboot the node.&lt;br /&gt;
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
(I'll update this part soon)&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=1950</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=1950"/>
		<updated>2006-08-04T08:54:14Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: /* Setting up Heartbeat */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example the two machines building the cluster run on CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps like recompiling the DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module, but the DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
A production hardware node should not run any applications that are not strictly needed for OpenVZ itself (for security reasons, anything else should run inside a VE). Therefore, compile DRBD and Heartbeat on a separate machine running CentOS 4.3 (in this example I used a virtual machine on VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has been used successfully in many two-node clusters around the world. Its codebase has been in production use for years and is very stable. At the time of writing, version 1.2.4 is the current release of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the package:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you will find four rpm packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example the heartbeat, heartbeat-pils, and heartbeat-stonith packages are needed; heartbeat-ldirectord is not used.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, make sure to use the version that matches the DRBD version included in the OpenVZ kernel you want to use. If you are unsure about the version, perform the following steps while running that OpenVZ kernel on a test machine (I used another virtual machine on VMware Server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that this way the kernel-devel package from CentOS is used, but this does not matter, as the resulting drbd-km rpm will not be used (the DRBD kernel module is already included in the OpenVZ kernel).&lt;br /&gt;
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines the same way you would for a normal OpenVZ installation, but do not create a filesystem for /vz. That filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes, as described in [[quick installation]]. Update grub configuration to use the OpenVZ kernel by default and reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes create a partition that acts as the underlying DRBD device. The partitions should have exactly the same size (for this example I created a 10 GB partition hda3 with fdisk on each node). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
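To verify the two partitions really have the same size, compare their byte counts. On the nodes the numbers would come from 'blockdev --getsize64 /dev/hda3' (blockdev is part of util-linux); the sketch below compares two example values.&lt;br /&gt;

```shell
# Compare the partition sizes reported by the two nodes before setting up DRBD.
size_node1=10737418240   # example: blockdev --getsize64 /dev/hda3 on ovz-node1
size_node2=10737418240   # example: the same command on ovz-node2
if [ "$size_node1" -eq "$size_node2" ]; then
    echo "sizes match"
else
    echo "sizes differ - fix the partitioning first"
fi
```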
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in sync, and DRBD has no way of knowing whether you want the initial sync to run from ovz-node1 to ovz-node2 or the other way round. As there is no data on the device yet, the direction does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable, even though it is still syncing in the background.&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it (only on ovz-node1!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /mnt&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz/* /mnt/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards move the original files and replace them with symbolic links (do this on both nodes):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz /etc/sysconfig/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz /etc/sysconfig/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
Install the necessary Heartbeat rpms on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv heartbeat-1.2.4-1.i386.rpm heartbeat-pils-1.2.4-1.i386.rpm heartbeat-stonith-1.2.4-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:heartbeat-pils         ########################################### [ 33%]&lt;br /&gt;
   2:heartbeat-stonith      ########################################### [ 67%]&lt;br /&gt;
   3:heartbeat              ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file ha.cf and copy it to /etc/ha.d/ha.cf on both nodes. Details about this file can be found at http://www.linux-ha.org/ha.cf. Below is an example configuration which uses the two network connections and also a serial connection for heartbeat packets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Heartbeat logging configuration&lt;br /&gt;
logfacility daemon&lt;br /&gt;
&lt;br /&gt;
# Heartbeat cluster members&lt;br /&gt;
node ovz-node1&lt;br /&gt;
node ovz-node2&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication timing&lt;br /&gt;
keepalive 1&lt;br /&gt;
warntime 10&lt;br /&gt;
deadtime 30&lt;br /&gt;
initdead 120&lt;br /&gt;
&lt;br /&gt;
# Heartbeat communication paths&lt;br /&gt;
udpport 694&lt;br /&gt;
ucast eth1 192.168.255.1&lt;br /&gt;
ucast eth1 192.168.255.2&lt;br /&gt;
ucast eth0 192.168.1.201&lt;br /&gt;
ucast eth0 192.168.1.202&lt;br /&gt;
baud 19200&lt;br /&gt;
serial /dev/ttyS0&lt;br /&gt;
&lt;br /&gt;
# Don't fail back automatically&lt;br /&gt;
auto_failback off&lt;br /&gt;
&lt;br /&gt;
# Monitoring of network connection to default gateway&lt;br /&gt;
ping 192.168.1.1&lt;br /&gt;
respawn hacluster /usr/lib64/heartbeat/ipfail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file authkeys and copy it to /etc/ha.d/authkeys on both nodes. Set the permissions of this file to 600. Details about this file can be found at http://www.linux-ha.org/authkeys. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth 1&lt;br /&gt;
1 sha1 PutYourSuperSecretKeyHere&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the Heartbeat configuration file haresources and copy it to /etc/ha.d/haresources on both nodes. Details about this file can be found at http://www.linux-ha.org/haresources. Below is an example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ovz-node1 datadisk::r0 Filesystem::/dev/drbd0::/vz::ext3 vz MailTo::youremail@yourdomain.tld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Finally, you can now start heartbeat on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/heartbeat start&lt;br /&gt;
Starting High-Availability services:&lt;br /&gt;
                                                           [  OK  ]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when the kernel contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
(I'll update this part soon)&lt;br /&gt;
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
(I'll update this part soon)&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=1912</id>
		<title>HA cluster with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=HA_cluster_with_DRBD_and_Heartbeat&amp;diff=1912"/>
		<updated>2006-07-30T18:17:53Z</updated>

		<summary type="html">&lt;p&gt;Wfischer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article shows how to set up an OpenVZ high availability (HA) cluster using the data replication software DRBD and the cluster manager Heartbeat. In this example the two machines building the cluster run on CentOS 4.3. The article also shows how to do kernel updates in the cluster, including necessary steps like recompiling the DRBD userspace tools. For this purpose, kernel 2.6.8-022stab078.10 (containing DRBD module 0.7.17) is used as the initial kernel version, and kernel 2.6.8-022stab078.14 (containing DRBD module 0.7.20) as the updated kernel version.&lt;br /&gt;
&lt;br /&gt;
Additional information about clustering of virtual machines can be found in the following paper: http://www.linuxtag.org/2006/fileadmin/linuxtag/dvd/12080-paper.pdf&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
The OpenVZ kernel already includes the DRBD module, but the DRBD userspace tools and the cluster manager Heartbeat must be provided separately. As the API version of the DRBD userspace tools must exactly match the API version of the module, compile them yourself. Also compile Heartbeat yourself, as at the time of this writing the CentOS extras repository only contained an old CVS version of Heartbeat.&lt;br /&gt;
&lt;br /&gt;
A production hardware node should not run any applications that are not strictly needed for OpenVZ itself (for security reasons, anything else should run inside a VE). Therefore, compile DRBD and Heartbeat on a separate machine running CentOS 4.3 (in this example I used a virtual machine on VMware Server).&lt;br /&gt;
&lt;br /&gt;
=== Compiling Heartbeat ===&lt;br /&gt;
Heartbeat version 1.2.* has been used successfully in many two-node clusters around the world. Its codebase has been in production use for years and is very stable. At the time of writing, version 1.2.4 is the current release of the 1.2.* branch.&lt;br /&gt;
&lt;br /&gt;
Get the tar.gz of the current version of the 1.2.* branch from http://linux-ha.org/download/index.html; at the time of this writing this is http://linux-ha.org/download/heartbeat-1.2.4.tar.gz. Use rpmbuild to build the package:&lt;br /&gt;
&amp;lt;pre&amp;gt;rpmbuild -ta heartbeat-1.2.4.tar.gz&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you will find four rpm packages in /usr/src/redhat/RPMS/i386 (heartbeat-1.2.4-1.i386.rpm, heartbeat-ldirectord-1.2.4-1.i386.rpm, heartbeat-pils-1.2.4-1.i386.rpm, heartbeat-stonith-1.2.4-1.i386.rpm). In this example only heartbeat-1.2.4-1.i386.rpm is needed.&lt;br /&gt;
&lt;br /&gt;
=== Compiling DRBD userspace tools ===&lt;br /&gt;
When compiling the DRBD userspace tools, make sure to use the version that matches the DRBD version included in the OpenVZ kernel you want to use. If you are unsure about the version, perform the following steps while running that OpenVZ kernel on a test machine (I used another virtual machine on VMware Server to try this):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@testmachine ~]# cat /proc/version&lt;br /&gt;
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006&lt;br /&gt;
[root@testmachine ~]# modprobe drbd&lt;br /&gt;
[root@testmachine ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Unconfigured&lt;br /&gt;
 1: cs:Unconfigured&lt;br /&gt;
[root@testmachine ~]# rmmod drbd&lt;br /&gt;
[root@testmachine ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here the version of the DRBD module is 0.7.17, so the userspace tools for 0.7.17 are necessary.&lt;br /&gt;
&lt;br /&gt;
Back on the buildmachine, do the following to create the rpm:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@buildmachine ~]# yum install kernel-devel gcc bison flex&lt;br /&gt;
Setting up Install Process&lt;br /&gt;
Setting up repositories&lt;br /&gt;
Reading repository metadata in from local files&lt;br /&gt;
Parsing package install arguments&lt;br /&gt;
Nothing to do&lt;br /&gt;
[root@buildmachine ~]# tar xfz drbd-0.7.17.tar.gz&lt;br /&gt;
[root@buildmachine ~]# cd drbd-0.7.17&lt;br /&gt;
[root@buildmachine drbd-0.7.17]# make rpm KDIR=/usr/src/kernels/2.6.9-34.0.2.EL-i686/&lt;br /&gt;
[...]&lt;br /&gt;
You have now:&lt;br /&gt;
-rw-r--r--  1 root root 288728 Jul 30 10:40 dist/RPMS/i386/drbd-0.7.17-1.i386.rpm&lt;br /&gt;
-rw-r--r--  1 root root 518369 Jul 30 10:40 dist/RPMS/i386/drbd-km-2.6.9_34.0.2.EL-0.7.17-1.i386.rpm&lt;br /&gt;
[root@buildmachine drbd-0.7.17]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that this way the kernel-devel package from CentOS is used, but this does not matter, as the resulting drbd-km rpm will not be used (the DRBD kernel module is already included in the OpenVZ kernel).&lt;br /&gt;
&lt;br /&gt;
== Installing the two nodes ==&lt;br /&gt;
Install the two machines the same way you would for a normal OpenVZ installation, but do not create a filesystem for /vz. That filesystem will be created later on top of DRBD.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+'''Example installation configuration'''&lt;br /&gt;
! Parameter !! node1 !! node2&lt;br /&gt;
|-&lt;br /&gt;
! hostname&lt;br /&gt;
| ovz-node1&lt;br /&gt;
| ovz-node2&lt;br /&gt;
|-&lt;br /&gt;
! / filesystem&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
| hda1, 10 GB&lt;br /&gt;
|-&lt;br /&gt;
! swap space&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
| hda2, 2048 MB&lt;br /&gt;
|-&lt;br /&gt;
! public LAN&lt;br /&gt;
| eth0, 192.168.1.201&lt;br /&gt;
| eth0, 192.168.1.202&lt;br /&gt;
|-&lt;br /&gt;
! private LAN&lt;br /&gt;
| eth1, 192.168.255.1 (Gbit Ethernet)&lt;br /&gt;
| eth1, 192.168.255.2 (Gbit Ethernet)&lt;br /&gt;
|-&lt;br /&gt;
! other install options&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
| no firewall, no SELinux&lt;br /&gt;
|-&lt;br /&gt;
! package groups&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
| deactivated everything, only kept vim-enhanced&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Installing OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
Get the OpenVZ kernel and utilities and install them on both nodes. Update grub configuration to use the OpenVZ kernel by default and reboot both machines.&lt;br /&gt;
&lt;br /&gt;
== Setting up DRBD ==&lt;br /&gt;
&lt;br /&gt;
On each of the two nodes create a partition that acts as the underlying DRBD device. The partitions should have exactly the same size (for this example I created a 10 GB partition hda3 with fdisk on each node). Note that it might be necessary to reboot the machines to re-read the partition table.&lt;br /&gt;
&lt;br /&gt;
Install the rpm of the DRBD userspace tools on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# rpm -ihv drbd-0.7.17-1.i386.rpm&lt;br /&gt;
Preparing...                ########################################### [100%]&lt;br /&gt;
   1:drbd                   ########################################### [100%]&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then create the drbd.conf configuration file and copy it to /etc/drbd.conf on both nodes. Below is the example configuration file that is used in this article:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
resource r0 {&lt;br /&gt;
  protocol C;&lt;br /&gt;
  incon-degr-cmd &amp;quot;echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  startup {&lt;br /&gt;
    degr-wfc-timeout 120;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  net {&lt;br /&gt;
    on-disconnect reconnect;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  disk {&lt;br /&gt;
    on-io-error   detach;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  syncer {&lt;br /&gt;
    rate 30M;&lt;br /&gt;
    group 1;&lt;br /&gt;
    al-extents 257;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node1 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.1:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  on ovz-node2 {&lt;br /&gt;
    device     /dev/drbd0;&lt;br /&gt;
    disk       /dev/hda3;&lt;br /&gt;
    address    192.168.255.2:7788;&lt;br /&gt;
    meta-disk  internal;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Start DRBD on both nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# /etc/init.d/drbd start&lt;br /&gt;
Starting DRBD resources:    [ d0 s0 n0 ].&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then check the status of /proc/drbd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:Connected st:Secondary/Secondary ld:Inconsistent&lt;br /&gt;
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both nodes are now Secondary and Inconsistent. The latter is because the underlying storage is not yet in sync, and DRBD has no way of knowing whether you want the initial sync to run from ovz-node1 to ovz-node2 or the other way round. As there is no data on the device yet, the direction does not matter.&lt;br /&gt;
&lt;br /&gt;
To start the sync from ovz-node1 to ovz-node2, do the following on ovz-node1:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# drbdadm -- --do-what-I-say primary all&lt;br /&gt;
[root@ovz-node1 ~]# cat /proc/drbd&lt;br /&gt;
version: 0.7.17 (api:77/proto:74)&lt;br /&gt;
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12&lt;br /&gt;
 0: cs:SyncSource st:Primary/Secondary ld:Consistent&lt;br /&gt;
    ns:627252 nr:0 dw:0 dr:629812 al:0 bm:38 lo:640 pe:0 ua:640 ap:0&lt;br /&gt;
        [=&amp;gt;..................] sync'ed:  6.6% (8805/9418)M&lt;br /&gt;
        finish: 0:04:51 speed: 30,888 (27,268) K/sec&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see, DRBD syncs at about 30 MB per second, as configured in /etc/drbd.conf. On the SyncSource (ovz-node1 in this case) the DRBD device is already usable, even though it is still syncing in the background.&lt;br /&gt;
&lt;br /&gt;
So you can immediately create the filesystem:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mkfs.ext3 /dev/drbd0&lt;br /&gt;
[...]&lt;br /&gt;
[root@ovz-node1 ~]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy necessary OpenVZ files to DRBD device ===&lt;br /&gt;
&lt;br /&gt;
Currently, ovz-node1 is still Primary of /dev/drbd0. You can now mount it and copy the necessary files to it (only on ovz-node1!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mount /dev/drbd0 /mnt&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /vz/* /mnt/&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# mkdir -p /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /etc/sysconfig/vz-scripts /mnt/cluster/etc/sysconfig&lt;br /&gt;
[root@ovz-node1 ~]# cp -a /var/vzquota /mnt/cluster/var&lt;br /&gt;
[root@ovz-node1 ~]# umount /dev/drbd0&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards move the original files and replace them with symbolic links (do this on both nodes):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz /etc/sysconfig/vz.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig&lt;br /&gt;
[root@ovz-node1 ~]# mv /var/vzquota /var/vzquota.orig&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz /etc/sysconfig/vz&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts&lt;br /&gt;
[root@ovz-node1 ~]# ln -s /vz/cluster/var/vzquota /var/vzquota&lt;br /&gt;
[root@ovz-node1 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Setting up Heartbeat ==&lt;br /&gt;
&lt;br /&gt;
(I'll update this part soon)&lt;br /&gt;
&lt;br /&gt;
== How to do OpenVZ kernel updates when the kernel contains a new DRBD version ==&lt;br /&gt;
&lt;br /&gt;
(I'll update this part soon)&lt;br /&gt;
&lt;br /&gt;
== How to do updates of vzctl, vzctl-lib, and vzquota ==&lt;br /&gt;
&lt;br /&gt;
(I'll update this part soon)&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Wfischer</name></author>
		
	</entry>
</feed>