Using private IPs for Hardware Nodes

== Prerequisites ==
This configuration was tested on a RHEL5 OpenVZ Hardware Node and a container based on a Fedora Core 5 template.
Other host OSes and templates might require additional configuration; please add the corresponding OS-specific changes if you face any.
This article assumes the presence of the <code>brctl</code>, <code>ip</code> and <code>ifconfig</code> utilities. You may need to install missing packages such as <code>bridge-utils</code>, <code>iproute</code> or <code>net-tools</code>, which contain these utilities.
This article assumes you have already [[Quick installation|installed OpenVZ]], prepared the [[OS template cache]](s) and have [[Basic_operations_in_OpenVZ_environment|container(s) created]]. If not, follow the links to perform the steps needed. {{Note|don't assign an IP after container creation.}}
== An OVZ Hardware Node has a single Ethernet interface ==
[HN]# /tmp/br_add >/dev/null 2>&1 &
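The contents of <code>/tmp/br_add</code> are not shown in this excerpt. Below is a minimal, hypothetical sketch of what such a bridge-setup script might do, assuming the bridge is named <code>br0</code>, the physical interface is <code>eth0</code>, and using placeholder private addresses (<code>192.168.0.2/24</code>, gateway <code>192.168.0.1</code>). It is started in the background with output discarded, presumably because re-plumbing eth0 briefly interrupts the node's connectivity:

```shell
#!/bin/bash
# /tmp/br_add -- hypothetical sketch; the real script is not shown in
# this excerpt. It moves eth0 into a new bridge br0 and moves the
# node's private IP (placeholder values) from eth0 to the bridge.
brctl addbr br0                                  # create the bridge
ifconfig eth0 0                                  # drop the IP from the physical interface
brctl addif br0 eth0                             # enslave eth0 to the bridge
ifconfig br0 192.168.0.2 netmask 255.255.255.0   # assign the IP to the bridge
ip route add default via 192.168.0.1 dev br0     # resurrect the default routing
```

This matches the "Assign the IP to the bridge" and "Resurrect the default routing" steps referenced later in this article.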
=== Container configuration ===
==== Start a container ====
[HN]# vzctl start 101
==== Add a [[Virtual_Ethernet_device|veth interface]] to the container ====
[HN]# vzctl set 101 --netif_add eth0 --save
==== Set up an IP on the newly created container's veth interface ====
[HN]# vzctl exec 101 ifconfig eth0 85.86.87.195/26
==== Add the container's veth interface to the bridge ====
[HN]# brctl addif br0 veth101.0
{{Note|after the interface is added, the bridge spends the ''forward delay'' learning the topology before it starts forwarding packets; the current value can be seen in <code>/sys/class/net/$BR_NAME/bridge/forward_delay</code> (in SEC*USER_HZ units).}}
==== Set up the default route for the container ====
[HN]# vzctl exec 101 ip route add default via 85.86.87.193 dev eth0
==== (Optional) Add CT↔HN routes ====
The above configuration provides the following connections:
* CT X ↔ CT Y (where CT X and CT Y can reside on any OVZ HN)
* CT ↔ Internet
Note that:
* the accessibility of the CT from the HN depends on the local gateway providing NAT (probably yes);
* the accessibility of the HN from the CT depends on the ISP gateway being aware of the local network (probably not).
So, to provide CT ↔ HN accessibility regardless of the gateways' configuration, you can add the following routes:
[HN]# ip route add 85.86.87.195 dev br0
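For the opposite direction, a route from the CT to the HN can be added the same way, using the pattern of the earlier <code>vzctl exec</code> commands; <code>192.168.0.2</code> below is a hypothetical placeholder for the HN's private IP, which is not given in this excerpt:

```shell
# in CT 101: route to the Hardware Node's private IP
# (192.168.0.2 is a placeholder -- substitute your HN's address)
vzctl exec 101 ip route add 192.168.0.2 dev eth0
```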
==== Edit the container's configuration ====
Add these parameters to the <code>/etc/vz/conf/$CTID.conf</code> file; they will be used during the network configuration:
* Add/change <code>CONFIG_CUSTOMIZED="yes"</code> (indicates that a custom script should be run on container start)
* Add <code>VETH_IP_ADDRESS="CT IP/MASK"</code> (a container can have multiple IPs, separated by spaces)
* Add <code>VE_DEFAULT_GATEWAY="CT DEFAULT GATEWAY"</code>
* Add <code>BRIDGEDEV="BRIDGE NAME"</code> (the name of the bridge to which the container's veth interface should be added)
An example:
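A concrete fragment, reusing the IP, gateway and bridge values from the steps above for CT 101:

```shell
# fragment of /etc/vz/conf/101.conf
CONFIG_CUSTOMIZED="yes"
VETH_IP_ADDRESS="85.86.87.195/26"
VE_DEFAULT_GATEWAY="85.86.87.193"
BRIDGEDEV="br0"
```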
==== Create a custom network configuration script ====
This script will be called each time a container is started (e.g. <code>/usr/sbin/vznetcfg.custom</code>):
<pre>
#!/bin/bash
# /usr/sbin/vznetcfg.custom
# a script to bring up bridged network interfaces (veth's) in a container
GLOBALCONFIGFILE=/etc/vz/vz.conf
CTCONFIGFILE=/etc/vz/conf/$VEID.conf
vzctl=/usr/sbin/vzctl
brctl=/usr/sbin/brctl
ip=/sbin/ip
ifconfig=/sbin/ifconfig
. $GLOBALCONFIGFILE
. $CTCONFIGFILE
NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`
for str in $NETIF_OPTIONS; do
	# 'ifname' is the interface name inside the container
	if [[ "$str" =~ ^ifname= ]]; then
		CTIFNAME=${str#*=};
	fi
	# 'host_ifname' is the veth end visible on the Hardware Node
	if [[ "$str" =~ ^host_ifname= ]]; then
		VZHOSTIF=${str#*=};
	fi
done
if [ ! -n "$VETH_IP_ADDRESS" ]; then
	echo "According to $CTCONFIGFILE, CT$VEID has no veth IPs configured."
	exit 1
fi
if [ ! -n "$VZHOSTIF" ]; then
	echo "According to $CTCONFIGFILE, CT$VEID has no veth interface configured."
	exit 1
fi
if [ ! -n "$CTIFNAME" ]; then
	echo "Corrupted $CTCONFIGFILE: no 'ifname' defined for host_ifname $VZHOSTIF."
	exit 1
fi
echo "Initializing interface $VZHOSTIF for CT$VEID."
$ifconfig $VZHOSTIF 0
CTROUTEDEV=$VZHOSTIF
if [ -n "$BRIDGEDEV" ]; then
	echo "Adding interface $VZHOSTIF to the bridge $BRIDGEDEV."
	CTROUTEDEV=$BRIDGEDEV
	$brctl addif $BRIDGEDEV $VZHOSTIF
fi
# up the $CTIFNAME link in CT$VEID
$vzctl exec $VEID $ip link set $CTIFNAME up
for IP in $VETH_IP_ADDRESS; do
	echo "Adding IP $IP to $CTIFNAME in CT$VEID."
	$vzctl exec $VEID $ip address add $IP dev $CTIFNAME
	# remove the netmask
	IP_STRIP=${IP%%/*};
	echo "Adding a route from CT0 to CT$VEID."
	$ip route add $IP_STRIP dev $CTROUTEDEV
done
if [ -n "$CT0_IP" ]; then
	echo "Adding a route from CT$VEID to CT0."
	$vzctl exec $VEID $ip route add $CT0_IP dev $CTIFNAME
fi
if [ -n "$VE_DEFAULT_GATEWAY" ]; then
	echo "Setting $VE_DEFAULT_GATEWAY as the default gateway for CT$VEID."
	$vzctl exec $VEID \
		$ip route add default via $VE_DEFAULT_GATEWAY dev $CTIFNAME
fi
</pre>
{{Note|this script can be easily extended to work for multiple &lt;bridge, IP address, veth device&gt; triples; see http://vireso.blogspot.com/2008/02/2-veth-with-2-brindges-on-openvz-at.html}}
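The option parsing and netmask stripping used by the script rely only on standard bash features and can be tried standalone. The <code>NETIF</code> value below is a hypothetical example of the string <code>vzctl</code> stores in <code>$VEID.conf</code> after <code>--netif_add eth0</code>:

```shell
#!/bin/bash
# Standalone sketch of the string handling done by vznetcfg.custom.
# NETIF is a hypothetical example value, not taken from a real node.
NETIF="ifname=eth0,mac=00:18:51:01:02:03,host_ifname=veth101.0,host_mac=00:18:51:01:02:04"

# one option per line, as in the script
NETIF_OPTIONS=$(echo "$NETIF" | sed 's/,/\n/g')

for str in $NETIF_OPTIONS; do
    if [[ "$str" =~ ^ifname= ]]; then
        CTIFNAME=${str#*=}        # interface name inside the container
    fi
    if [[ "$str" =~ ^host_ifname= ]]; then
        VZHOSTIF=${str#*=}        # veth end visible on the Hardware Node
    fi
done

# the same expansion the script uses to strip the netmask from an IP
IP="85.86.87.195/26"
IP_STRIP=${IP%%/*}

echo "CT interface: $CTIFNAME, host interface: $VZHOSTIF, bare IP: $IP_STRIP"
```

Note that the <code>^ifname=</code> regex is anchored, so it does not accidentally match the <code>host_ifname=</code> option.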
==== Make the script run on container start ====
In order to run the above script on container start, create the file <code>/etc/vz/vznet.conf</code> with the following contents:
EXTERNAL_SCRIPT="/usr/sbin/vznetcfg.custom"
{{Note|<code>/usr/sbin/vznetcfg.custom</code> should be executable: <code>chmod +x /usr/sbin/vznetcfg.custom</code>.}}
==== Setting the route CT → HN ====
To set up a route from the CT to the HN, the custom script has to know an HN IP (the <code>$CT0_IP</code> variable in the script). There are several ways to specify it:
# Add a <code>CT0_IP="CT0 IP"</code> entry to the <code>$VEID.conf</code> file.
== An OVZ Hardware Node has two Ethernet interfaces ==
Let's assign eth0 for the external traffic and eth1 for the local one.
If there is no need to make the container accessible from the HN and vice versa, it is enough to replace 'br0' with 'eth1' in the following steps of the above configuration:
* Hardware Node configuration → [[Using_private_IPs_for_Hardware_Nodes#Assign_the_IP_to_the_bridge|Assign the IP to the bridge]]
* Hardware Node configuration → [[Using_private_IPs_for_Hardware_Nodes#Resurrect_the_default_routing|Resurrect the default routing]]
Otherwise, it is necessary to set a local IP for 'br0' to ensure CT ↔ HN connection availability.
== Putting containers into different subnetworks ==
It is enough to set up the correct $VETH_IP_ADDRESS and $VE_DEFAULT_GATEWAY values in the
[[Using_private_IPs_for_Hardware_Nodes#Edit_the_container.27s_configuration|above configuration]].
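For example, to place a hypothetical CT 102 into a 10.0.1.0/24 subnetwork (all addresses here are illustrative, not from the article):

```shell
# fragment of /etc/vz/conf/102.conf (hypothetical values)
CONFIG_CUSTOMIZED="yes"
VETH_IP_ADDRESS="10.0.1.5/24"
VE_DEFAULT_GATEWAY="10.0.1.1"
BRIDGEDEV="br0"
```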