Using private IPs for Hardware Nodes

From OpenVZ Virtuozzo Containers Wiki
Latest revision as of 21:39, 12 September 2016

Warning: This article applies to legacy OpenVZ (pre-VZ7) only. For Virtuozzo 7 documentation, see https://docs.openvz.org.

This article describes how to assign public IPs to containers running on OVZ Hardware Nodes in case you have the following network topology:

(Figure: An initial network topology)

Using a spare IP in the same range

If you have a spare IP available, you can assign it to a subinterface and use it as the nameserver:

[HN] ifconfig eth0:1 *.*.*.*
[HN] vzctl set 101 --nameserver *.*.*.*

Prerequisites

This configuration was tested on a RHEL5 OpenVZ Hardware Node and a container based on a Fedora Core 5 template. Other host OSes and templates might require some configuration changes; please add the corresponding OS-specific changes if you have faced any.

This article assumes the presence of the 'brctl', 'ip' and 'ifconfig' utilities. You may need to install missing packages such as 'bridge-utils', 'iproute' or 'net-tools', which contain those utilities.
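A quick way to check this prerequisite is a small shell helper; the function name `missing_utils` is made up for this sketch, and the utility list simply mirrors the one above:

```shell
#!/bin/bash
# Sketch: report which of the required utilities are missing, so the
# right package (bridge-utils, iproute or net-tools) can be installed.
missing_utils() {
    local missing=""
    for util in "$@"; do
        # command -v succeeds only if the utility is in $PATH
        command -v "$util" >/dev/null 2>&1 || missing="$missing $util"
    done
    echo "$missing"
}

MISSING=$(missing_utils brctl ip ifconfig)
if [ -n "$MISSING" ]; then
    echo "missing utilities:$MISSING"
fi
```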

This article assumes you have already installed OpenVZ, prepared the OS template cache(s) and have container(s) created. If not, follow the links to perform the steps needed.

Note: don't assign an IP after container creation.

An OVZ Hardware Node has only one Ethernet interface

(assume eth0)

Hardware Node configuration

Warning: if you are configuring the node remotely, you must prepare a script with the commands below and run it in the background with its output redirected, or you will lose access to the Node.

Create a bridge device

[HN]# brctl addbr br0

Remove the IP from the eth0 interface

[HN]# ifconfig eth0 0

Add the eth0 interface into the bridge

[HN]# brctl addif br0 eth0

Assign the IP to the bridge

(the same one that was assigned to eth0 earlier)

[HN]# ifconfig br0 10.0.0.2/24

Resurrect the default routing

[HN]# ip route add default via 10.0.0.1 dev br0


A script example

[HN]# cat /tmp/br_add 
#!/bin/bash

brctl addbr br0
ifconfig eth0 0 
brctl addif br0 eth0 
ifconfig br0 10.0.0.2/24 
ip route add default via 10.0.0.1 dev br0
[HN]# /tmp/br_add >/dev/null 2>&1 &

Container configuration

Start a container

[HN]# vzctl start 101

Add a veth interface to the container

[HN]# vzctl set 101 --netif_add eth0 --save

Set an IP on the newly created container's veth interface

[HN]# vzctl exec 101 ifconfig eth0 85.86.87.195/26

Add the container's veth interface to the bridge

[HN]# brctl addif br0 veth101.0
Note: There will be a delay of about 15 seconds (the default for the 2.6.18 kernel) while the bridge software runs STP to detect loops and transitions the veth interface to the forwarding state.

Set up the default route for the container

[HN]# vzctl exec 101 ip route add default via 85.86.87.193 dev eth0

(Optional) Add CT ↔ HN routes

The above configuration provides the following connections:

  • CT X ↔ CT Y (where CT X and CT Y can reside on any OVZ HN)
  • CT ↔ Internet

Note that

  • The accessibility of the CT from the HN depends on whether the local gateway provides NAT (probably yes)
  • The accessibility of the HN from the CT depends on whether the ISP gateway is aware of the local network (probably not)

So, to provide CT ↔ HN accessibility regardless of the gateways' configuration, you can add the following routes:

[HN]# ip route add 85.86.87.195 dev br0
[HN]# vzctl exec 101 ip route add 10.0.0.2 dev eth0

Resulting OpenVZ Node configuration

(Figure: Resulting OpenVZ Node configuration)

Making the configuration persistent

Set up a bridge on the HN

This can be done by configuring the ifcfg-* files located in /etc/sysconfig/network-scripts/.

Assuming you had a configuration file (e.g. ifcfg-eth0) like:

DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.2
NETMASK=255.255.255.0
GATEWAY=10.0.0.1

To automatically create bridge br0 you can create ifcfg-br0:

DEVICE=br0
TYPE=Bridge
ONBOOT=yes
IPADDR=10.0.0.2
NETMASK=255.255.255.0
GATEWAY=10.0.0.1

and edit ifcfg-eth0 to add the eth0 interface into the bridge br0:

DEVICE=eth0
ONBOOT=yes
BRIDGE=br0

Edit the container's configuration

Add these parameters to the /etc/vz/conf/$CTID.conf file; they will be used during network configuration:

  • Add VETH_IP_ADDRESS="IP/MASK" (a container can have multiple IPs separated by spaces)
  • Add VE_DEFAULT_GATEWAY="CT DEFAULT GATEWAY"
  • Add BRIDGEDEV="BRIDGE NAME" (a bridge name to which the container veth interface should be added)

An example:

# Network customization section
VETH_IP_ADDRESS="85.86.87.195/26"
VE_DEFAULT_GATEWAY="85.86.87.193"
BRIDGEDEV="br0"
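Because the custom script simply sources $CTID.conf, the parameters above become ordinary shell variables. A self-contained sketch (the path /tmp/101.conf is a throwaway file for illustration, not a real container config):

```shell
#!/bin/bash
# Sketch: mimic sourcing a container config with the example values above.
cat > /tmp/101.conf <<'EOF'
VETH_IP_ADDRESS="85.86.87.195/26"
VE_DEFAULT_GATEWAY="85.86.87.193"
BRIDGEDEV="br0"
EOF

# Sourcing the file turns each parameter into a shell variable.
. /tmp/101.conf

# Multiple space-separated IPs can be iterated with a plain for loop.
for IP in $VETH_IP_ADDRESS; do
    echo "would configure $IP on bridge $BRIDGEDEV via $VE_DEFAULT_GATEWAY"
done
```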

Create a custom network configuration script

which should be called each time a container is started (e.g. /usr/sbin/vznetcfg.custom):

#!/bin/bash
# /usr/sbin/vznetcfg.custom
# a script to bring up bridged network interfaces (veth's) in a container

GLOBALCONFIGFILE=/etc/vz/vz.conf
CTCONFIGFILE=/etc/vz/conf/$VEID.conf
vzctl=/usr/sbin/vzctl
brctl=/usr/sbin/brctl
ip=/sbin/ip
ifconfig=/sbin/ifconfig
. $GLOBALCONFIGFILE
. $CTCONFIGFILE

NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`
for str in $NETIF_OPTIONS; do
        # getting 'ifname' parameter value
        if echo "$str" | grep -q "^ifname=" ; then
                # remove the parameter name from the string (along with '=')
                CTIFNAME=${str#*=};
        fi
        # getting 'host_ifname' parameter value
        if echo "$str" | grep -q "^host_ifname=" ; then
                # remove the parameter name from the string (along with '=')
                VZHOSTIF=${str#*=};
        fi
done

if [ ! -n "$VETH_IP_ADDRESS" ]; then
   echo "According to $CTCONFIGFILE CT$VEID has no veth IPs configured."
   exit 1
fi

if [ ! -n "$VZHOSTIF" ]; then
   echo "According to $CTCONFIGFILE CT$VEID has no veth interface configured."
   exit 1
fi

if [ ! -n "$CTIFNAME" ]; then
   echo "Corrupted $CTCONFIGFILE: no 'ifname' defined for host_ifname $VZHOSTIF."
   exit 1
fi

echo "Initializing interface $VZHOSTIF for CT$VEID."
$ifconfig $VZHOSTIF 0

CTROUTEDEV=$VZHOSTIF

if [ -n "$BRIDGEDEV" ]; then
   echo "Adding interface $VZHOSTIF to the bridge $BRIDGEDEV."
   CTROUTEDEV=$BRIDGEDEV
   $brctl addif $BRIDGEDEV $VZHOSTIF
fi

# Up the interface $CTIFNAME link in CT$VEID
$vzctl exec $VEID $ip link set $CTIFNAME up

for IP in $VETH_IP_ADDRESS; do
   echo "Adding an IP $IP to the $CTIFNAME for CT$VEID."
   $vzctl exec $VEID $ip address add $IP dev $CTIFNAME

   # removing the netmask
   IP_STRIP=${IP%%/*};

   echo "Adding a route from CT0 to CT$VEID using $IP_STRIP."
   $ip route add $IP_STRIP dev $CTROUTEDEV
done

if [ -n "$CT0_IP" ]; then
   echo "Adding a route from CT$VEID to CT0."
   $vzctl exec $VEID $ip route add $CT0_IP dev $CTIFNAME
fi

if [ -n "$VE_DEFAULT_GATEWAY" ]; then
   echo "Setting $VE_DEFAULT_GATEWAY as a default gateway for CT$VEID."
   $vzctl exec $VEID \
        $ip route add default via $VE_DEFAULT_GATEWAY dev $CTIFNAME
fi

exit 0

Note: this script can easily be extended to work with multiple <bridge, IP address, veth device> triples; see http://sysadmin-ivanov.blogspot.com/2008/02/2-veth-with-2-bridges-on-openvz-at.html
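The parsing logic in the script above relies on plain bash parameter expansion: the comma-separated $NETIF string is split into lines and ${str#*=} strips everything up to the first '='. A self-contained sketch (the NETIF value, including the MAC addresses, is a made-up sample):

```shell
#!/bin/bash
# Sketch of the $NETIF parsing used by vznetcfg.custom above.
NETIF="ifname=eth0,mac=00:18:51:C7:1A:63,host_ifname=veth101.0,host_mac=00:18:51:A1:29:B1"

# split the option string on commas, one option per line
NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`
for str in $NETIF_OPTIONS; do
        if echo "$str" | grep -q "^ifname=" ; then
                # ${str#*=} removes the parameter name along with '='
                CTIFNAME=${str#*=}
        fi
        if echo "$str" | grep -q "^host_ifname=" ; then
                VZHOSTIF=${str#*=}
        fi
done

echo "CT interface:   $CTIFNAME"
echo "host interface: $VZHOSTIF"
```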

Make the script run on container start

To run the above script on container start, create the file /etc/vz/vznet.conf with the following contents:

EXTERNAL_SCRIPT="/usr/sbin/vznetcfg.custom"
Note: /usr/sbin/vznetcfg.custom should be executable (chmod +x /usr/sbin/vznetcfg.custom)
Note: When a CT is stopped, the HN → CT route(s) are still present in the routing table. We can use an on-umount script to solve this.

Create an on-umount script to remove HN → CT route(s)

which should be called each time the container with that VEID (/etc/vz/conf/$VEID.umount), or any container (/etc/vz/conf/vps.umount), is stopped.

#!/bin/bash
# /etc/vz/conf/$VEID.umount or /etc/vz/conf/vps.umount
# a script to remove the bridge routes to a veth-bridged container

CTCONFIGFILE=/etc/vz/conf/$VEID.conf
ip=/sbin/ip
. $CTCONFIGFILE

if [ ! -n "$VETH_IP_ADDRESS" ]; then
   exit 0
fi

if [ ! -n "$BRIDGEDEV" ]; then
   exit 0
fi

for IP in $VETH_IP_ADDRESS; do
   # removing the netmask
   IP_STRIP=${IP%%/*};
   
   echo "Remove a route from CT0 to CT$VEID using $IP_STRIP."
   $ip route del $IP_STRIP dev $BRIDGEDEV
done

exit 0
Note: The script should be executable (chmod +x /etc/vz/conf/vps.umount)
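Both scripts strip the netmask with ${IP%%/*}, which deletes the longest suffix matching '/*', i.e. everything from the first '/' onwards, leaving the bare address for 'ip route'. A minimal sketch:

```shell
#!/bin/bash
# Sketch of the netmask stripping used in the scripts above.
IP="85.86.87.195/26"
IP_STRIP=${IP%%/*}
echo "$IP_STRIP"    # -> 85.86.87.195
```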

Setting the route CT → HN

To set up a route from the CT to the HN, the custom script has to know the HN IP (the $CT0_IP variable in the script). There are several ways to specify it:

  1. Add an entry CT0_IP="CT0 IP" to the $VEID.conf
  2. Add an entry CT0_IP="CT0 IP" to the /etc/vz/vz.conf (the global configuration config file)
  3. Implement some smart algorithm to determine the CT0 IP right in the custom network configuration script

Each variant has its pros and cons; nevertheless, for a static HN IP configuration, variant 2 seems acceptable (and the simplest).
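For variant 3, one possible approach is to parse the 'ip addr' output for the bridge device. This sketch parses canned output from a here-document so it is self-contained; on a real HN you would pipe in `ip -4 addr show br0` instead (the helper name ct0_ip_from is made up for this sketch):

```shell
#!/bin/bash
# Sketch: extract the first IPv4 address from 'ip addr'-style output.
ct0_ip_from() {
    # print the address captured from the first 'inet A.B.C.D/NN' line
    sed -n 's/^[[:space:]]*inet \([0-9.]*\)\/.*/\1/p' | head -n 1
}

# Canned sample output standing in for `ip -4 addr show br0`.
CT0_IP=$(ct0_ip_from <<'EOF'
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
    inet 10.0.0.2/24 brd 10.0.0.255 scope global br0
EOF
)
echo "$CT0_IP"    # -> 10.0.0.2
```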

An OpenVZ Hardware Node has two Ethernet interfaces

Assume you have two interfaces, eth0 and eth1, and want to separate the local traffic (10.0.0.0/24) from the external traffic. Let's assign eth0 to the external traffic and eth1 to the local one.

If there is no need to make the container accessible from the HN and vice versa, it's enough to replace 'br0' with 'eth1' in the following steps of the above configuration:

  • Hardware Node configuration → Assign the IP to the bridge
  • Hardware Node configuration → Resurrect the default routing

It is necessary to set a local IP for 'br0' to ensure CT ↔ HN connection availability.

Putting containers to different subnetworks

It's enough to set the correct $VETH_IP_ADDRESS and $VE_DEFAULT_GATEWAY values in the above configuration.

See also

  • Virtual network device
  • Differences between venet and veth

Categories: HOWTO | Networking