Keepalived


Since I found it difficult to gather the necessary information to do this, I am documenting what worked for me here.

While this worked for me, I will attempt to make these instructions as general as possible, and I encourage anybody who runs into issues to make appropriate modifications and additions!

Using keepalived inside OpenVZ guests

Rough draft; I will return to format this nicely later.

Terminology

To reduce my typing, and hopefully make the document more clear and explicit, I will be using the following terms and abbreviations. I will try to pick what seems consistent with what is used in OpenVZ itself:

  • Virtual Environment [VE]: A 'guest' operating system installed and running under OpenVZ's control
  • Host Node [HN]: The physical server and OS hosting the OpenVZ Virtual Environments as 'guests'

Prerequisites

First, make sure you have OpenVZ installed and configured on your HN, according to the instructions prescribed for your platform (e.g. Debian, Red Hat, Gentoo, etc.). Make sure it works, including the OpenVZ kernel modules, etc.

You will need to set up at least two new VEs (Virtual Environments/Nodes) in order to make the best use of keepalived. Set them up with a minimal amount of configuration for now (unless you're going to do this on some pre-existing VEs you're already using).
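
As a rough sketch, creating two minimal VEs from the HN might look like the following. The container IDs (101, 102), the OS template name, hostnames, and IP addresses are placeholders; adjust them to your setup:

vzctl create 101 --ostemplate debian-6.0-x86_64
vzctl set 101 --hostname lb1 --ipadd 192.168.0.101 --save
vzctl start 101
vzctl create 102 --ostemplate debian-6.0-x86_64
vzctl set 102 --hostname lb2 --ipadd 192.168.0.102 --save
vzctl start 102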

You will need an Ethernet interface that can host more than one address on the same physical network, for example on your own LAN. Doing this on an interface connected directly to, say, a home cable modem will probably not work properly.

Finally, and this is important: you should either have physical access to a console on the HN, or be VERY comfortable with modifying your network settings over SSH. The first time I did this, I made a mistake and needed to call someone to reboot the system, since it was 3000 miles away.


Host Node Setup

Install Required Software

On a Debian-based HN

IPVS

Important! You have to turn on the NET_ADMIN capability for the container:

vzctl set $VEID --capability "NET_ADMIN:on" --save

Then restart the container.
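
Assuming the same $VEID as above, that restart is simply:

vzctl restart $VEID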


Install ipvsadm, then set all necessary modules to load at boot, and load them now.

[HN]$ sudo -i
[HN]# aptitude install ipvsadm
[HN]# for mod in ip_vs ip_vs_dh ip_vs_ftp ip_vs_lblc ip_vs_lblcr \
                  ip_vs_lc ip_vs_nq ip_vs_rr ip_vs_sed ip_vs_sh \
                  ip_vs_wlc ip_vs_wrr; 
       do 
           grep $mod /etc/modules >/dev/null ||
               echo $mod >> /etc/modules; 
           modprobe -a $mod; 
       done

Note: you probably don't need all of these, but I got sick of trying to find the minimal set and gave up :)
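
To confirm that the modules actually loaded and that IPVS responds, a quick check might look like this:

[HN]# lsmod | grep ip_vs
[HN]# ipvsadm -L -n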


Bridge Utils

This provides the brctl command. It should already be installed, but I won't take that for granted...

[HN]$ sudo aptitude install bridge-utils
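
If you do not already have a bridge for the containers to attach to, a sketch of creating one looks like this; the bridge name vzbr0 is just an example:

[HN]# brctl addbr vzbr0
[HN]# ip link set vzbr0 up
[HN]# brctl show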


Virtual Environment Setup

You can get real servers inside containers running using a veth interface connected to a Linux bridge. The problem is that a real server inside a container and the virtual server (the IPVS director) cannot coexist on the same HN (hardware node), no matter whether IPVS resides in the host or in a guest. The reason is that IPVS is not isolated in or from the container: when the IPIP packet gets unpacked, it is treated as if it came directly from the client and goes into IPVS handling again. This causes an endless loop.
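
As an illustration of the part that does work, here is a sketch of attaching a container-based real server to the bridge with a veth interface (keeping the IPVS director on a different HN). Container ID 101 and the bridge name vzbr0 are examples from above; the host-side device name veth101.0 is the usual vzctl default, but may differ on your system:

[HN]# vzctl set 101 --netif_add eth0 --save
[HN]# vzctl start 101
[HN]# ip link set veth101.0 up
[HN]# brctl addif vzbr0 veth101.0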