
Using private IPs for Hardware Nodes

This article describes how to assign public IPs to containers running on OVZ Hardware Nodes when you have the following network topology:

An initial network topology

Using a spare IP in the same range

If you have a spare IP to use, you can assign it to a subinterface and use it as the nameserver:

[HN] ifconfig eth0:1 *.*.*.*
[HN] vzctl set 101 --nameserver *.*.*.*

Prerequisites

This configuration was tested on a RHEL5 OpenVZ Hardware Node with a container based on a Fedora Core 5 template. Other host OSes and templates might require some configuration changes; please add the corresponding OS-specific changes if you have run into any.

This article assumes the presence of the 'brctl', 'ip' and 'ifconfig' utilities. You may need to install missing packages such as 'bridge-utils', 'iproute' or 'net-tools', which contain these utilities.
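
On a RHEL-like HN, the missing packages can usually be installed with yum (package names may differ on other distributions):

[HN]# yum install bridge-utils iproute net-tools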

This article assumes you have already installed OpenVZ, prepared the OS template cache(s) and have container(s) created. If not, follow the links to perform the steps needed.

  Note: don't assign an IP after container creation.
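
In other words, create the container without a venet IP; the CTID and template name below are only illustrative:

[HN]# vzctl create 101 --ostemplate fedora-core-5-i386-default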

An OVZ Hardware Node has only one Ethernet interface

(assume eth0)

Hardware Node configuration

Create a bridge device

[HN]# brctl addbr br0

Remove an IP from eth0 interface

[HN]# ifconfig eth0 0

Add eth0 interface into the bridge

[HN]# brctl addif br0 eth0

Assign the IP to the bridge

(the same that was assigned on eth0 earlier)

[HN]# ifconfig br0 10.0.0.2/24

Resurrect the default routing

[HN]# ip route add default via 10.0.0.1 dev br0

  Warning: if you are configuring the node remotely, you must put the above commands into a script and run it in the background with its output redirected, otherwise you will lose access to the Node.

A script example

[HN]# cat /tmp/br_add 
#!/bin/bash

brctl addbr br0
ifconfig eth0 0 
brctl addif br0 eth0 
ifconfig br0 10.0.0.2/24 
ip route add default via 10.0.0.1 dev br0
[HN]# /tmp/br_add >/dev/null 2>&1 &

Resulting OpenVZ Node configuration
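
To double-check the result, you can inspect the bridge membership, the bridge address and the routing table (read-only commands; the exact output depends on your setup):

[HN]# brctl show
[HN]# ip addr show br0
[HN]# ip route show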

 

Making the configuration persistent

Set up a bridge on the HN

This can be done by configuring the ifcfg-* files located in /etc/sysconfig/network-scripts/.

Assuming you had a configuration file (e.g. ifcfg-eth0) like:

DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.2
NETMASK=255.255.255.0
GATEWAY=10.0.0.1

To automatically create bridge br0 you can create ifcfg-br0:

DEVICE=br0
TYPE=Bridge
ONBOOT=yes
IPADDR=10.0.0.2
NETMASK=255.255.255.0
GATEWAY=10.0.0.1

and edit ifcfg-eth0 to add the eth0 interface into the bridge br0:

DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
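
Once the ifcfg-* files are in place, the new scheme can be applied by restarting the network service. As with the manual steps above, run this from the console, or in the background if you are connected through eth0 (a sketch for a RHEL-like HN):

[HN]# service network restart >/dev/null 2>&1 &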

Edit the container's configuration

Add the following parameters to the /etc/vz/conf/$CTID.conf file; they will be used during network configuration:

  • Add VETH_IP_ADDRESS="IP/MASK" (a container can have multiple IPs separated by spaces)
  • Add VE_DEFAULT_GATEWAY="CT DEFAULT GATEWAY"
  • Add BRIDGEDEV="BRIDGE NAME" (a bridge name to which the container veth interface should be added)

An example:

# Network customization section
VETH_IP_ADDRESS="85.86.87.195/26"
VE_DEFAULT_GATEWAY="85.86.87.193"
BRIDGEDEV="br0"
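
The custom script in the next section reads the container's NETIF setting, so the container also needs a veth interface configured; CTID 101 and the guest interface name eth0 are only examples:

[HN]# vzctl set 101 --netif_add eth0 --save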

Create a custom network configuration script

which should be called each time a container is started (e.g. /usr/sbin/vznetcfg.custom):

#!/bin/bash
# /usr/sbin/vznetcfg.custom
# a script to bring up bridged network interfaces (veth's) in a container

GLOBALCONFIGFILE=/etc/vz/vz.conf
CTCONFIGFILE=/etc/vz/conf/$VEID.conf
vzctl=/usr/sbin/vzctl
brctl=/usr/sbin/brctl
ip=/sbin/ip
ifconfig=/sbin/ifconfig
. $GLOBALCONFIGFILE
. $CTCONFIGFILE

NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`
for str in $NETIF_OPTIONS; do
        # getting 'ifname' parameter value
        if echo "$str" | grep -q "^ifname=" ; then
                # remove the parameter name from the string (along with '=')
                CTIFNAME=${str#*=};
        fi
        # getting 'host_ifname' parameter value
        if echo "$str" | grep -q "^host_ifname=" ; then
                # remove the parameter name from the string (along with '=')
                VZHOSTIF=${str#*=};
        fi
done

if [ ! -n "$VETH_IP_ADDRESS" ]; then
   echo "According to $CTCONFIGFILE CT$VEID has no veth IPs configured."
   exit 1
fi

if [ ! -n "$VZHOSTIF" ]; then
   echo "According to $CTCONFIGFILE CT$VEID has no veth interface configured."
   exit 1
fi

if [ ! -n "$CTIFNAME" ]; then
   echo "Corrupted $CTCONFIGFILE: no 'ifname' defined for host_ifname $VZHOSTIF."
   exit 1
fi

echo "Initializing interface $VZHOSTIF for CT$VEID."
$ifconfig $VZHOSTIF 0

CTROUTEDEV=$VZHOSTIF

if [ -n "$BRIDGEDEV" ]; then
   echo "Adding interface $VZHOSTIF to the bridge $BRIDGEDEV."
   CTROUTEDEV=$BRIDGEDEV
   $brctl addif $BRIDGEDEV $VZHOSTIF
fi

# Bring up the $CTIFNAME link in CT$VEID
$vzctl exec $VEID $ip link set $CTIFNAME up

for IP in $VETH_IP_ADDRESS; do
   echo "Adding an IP $IP to the $CTIFNAME for CT$VEID."
   $vzctl exec $VEID $ip address add $IP dev $CTIFNAME

   # removing the netmask
   IP_STRIP=${IP%%/*};

   echo "Adding a route from CT0 to CT$VEID using $IP_STRIP."
   $ip route add $IP_STRIP dev $CTROUTEDEV
done

if [ -n "$CT0_IP" ]; then
   echo "Adding a route from CT$VEID to CT0."
   $vzctl exec $VEID $ip route add $CT0_IP dev $CTIFNAME
fi

if [ -n "$VE_DEFAULT_GATEWAY" ]; then
   echo "Setting $VE_DEFAULT_GATEWAY as a default gateway for CT$VEID."
   $vzctl exec $VEID \
        $ip route add default via $VE_DEFAULT_GATEWAY dev $CTIFNAME
fi

exit 0

Note: this script can be easily extended to work for multiple triples <bridge, ip address, veth device>, see http://vireso.blogspot.com/2008/02/2-veth-with-2-brindges-on-openvz-at.html

Make the script run on container start

In order to run the above script on container start, create the file /etc/vz/vznet.conf with the following contents:

EXTERNAL_SCRIPT="/usr/sbin/vznetcfg.custom"
  Note: /usr/sbin/vznetcfg.custom should be executable (chmod +x /usr/sbin/vznetcfg.custom)
  Note: when a CT is stopped, the HN → CT route(s) are still present in the routing table. An on-umount script can be used to remove them (see below).

Create an on-umount script to remove the HN → CT route(s)

which will be called each time a specific container (/etc/vz/conf/$VEID.umount) or any container (/etc/vz/conf/vps.umount) is stopped.

#!/bin/bash
# /etc/vz/conf/$VEID.umount or /etc/vz/conf/vps.umount
# a script to remove routes to container with veth-bridge from bridge 

CTCONFIGFILE=/etc/vz/conf/$VEID.conf
ip=/sbin/ip
. $CTCONFIGFILE

if [ ! -n "$VETH_IP_ADDRESS" ]; then
   exit 0
fi

if [ ! -n "$BRIDGEDEV" ]; then
   exit 0
fi

for IP in $VETH_IP_ADDRESS; do
   # removing the netmask
   IP_STRIP=${IP%%/*};
   
   echo "Remove a route from CT0 to CT$VEID using $IP_STRIP."
   $ip route del $IP_STRIP dev $BRIDGEDEV
done

exit 0
  Note: The script should be executable (chmod +x /etc/vz/conf/vps.umount)

Setting the route CT → HN

To set up a route from the CT to the HN, the custom script has to know the HN IP (the $CT0_IP variable in the script). There are several ways to specify it:

  1. Add an entry CT0_IP="CT0 IP" to the $VEID.conf
  2. Add an entry CT0_IP="CT0 IP" to the /etc/vz/vz.conf (the global configuration config file)
  3. Implement some smart algorithm to determine the CT0 IP right in the custom network configuration script

Each variant has its pros and cons; nevertheless, for a HN with a static IP, variant 2 seems acceptable (and the simplest).
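
For variant 2, assuming the example HN address 10.0.0.2 used above, the entry in /etc/vz/vz.conf would simply be:

# /etc/vz/vz.conf (excerpt)
CT0_IP="10.0.0.2"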

An OpenVZ Hardware Node has two Ethernet interfaces

Assume you have two interfaces, eth0 and eth1, and you want to separate local traffic (10.0.0.0/24) from external traffic. Let's assign eth0 to the external traffic and eth1 to the local one.

If there is no need to make the container accessible from the HN and vice versa, it's enough to replace 'br0' with 'eth1' in the relevant steps of the above configuration.

Otherwise, it is necessary to set a local IP for 'br0' to ensure CT ↔ HN connectivity.

Putting containers to different subnetworks

It's enough to set up the correct $VETH_IP_ADDRESS and $VE_DEFAULT_GATEWAY values in the above configuration.
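
For example (the addresses are purely illustrative), two containers in different subnets could be configured like this:

# /etc/vz/conf/101.conf (excerpt)
VETH_IP_ADDRESS="85.86.87.195/26"
VE_DEFAULT_GATEWAY="85.86.87.193"
BRIDGEDEV="br0"

# /etc/vz/conf/102.conf (excerpt)
VETH_IP_ADDRESS="192.0.2.10/24"
VE_DEFAULT_GATEWAY="192.0.2.1"
BRIDGEDEV="br0"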
