Using private IPs for Hardware Nodes

{{Legacy}}

This article describes how to assign public IPs to containers running on OpenVZ Hardware Nodes in case you have the following network topology:
[[Image:PrivateIPs_fig1.gif|An initial network topology]]
 
== Using a spare IP in the same range ==
If you have a spare IP to use, you can assign it to a subinterface and use it as the nameserver for the container:
 
<pre>[HN] ifconfig eth0:1 *.*.*.*
[HN] vzctl set 101 --nameserver *.*.*.*</pre>
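For example, if the spare address were 10.0.0.5 (a made-up value; substitute your actual spare IP):
<pre>[HN] ifconfig eth0:1 10.0.0.5
[HN] vzctl set 101 --nameserver 10.0.0.5</pre>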
== Prerequisites ==
This configuration was tested on a RHEL5 OpenVZ Hardware Node and a container based on a Fedora Core 5 template. Other host OSs and templates might require some configuration changes; please add the corresponding OS-specific changes if you've faced any.<br>
This article assumes the presence of the 'brctl', 'ip' and 'ifconfig' utilities. You may need to install missing packages like 'bridge-utils'/'iproute'/'net-tools' or others which contain those utilities.<br>
This article assumes you have already [[Quick installation|installed OpenVZ]], prepared the [[OS template cache]](s) and have [[Basic_operations_in_OpenVZ_environment|container(s) created]]. If not, follow the links to perform the steps needed.
{{Note|don't assign an IP after container creation.}}

== (1) The OpenVZ Hardware Node has only one Ethernet interface ==
(assume eth0)
=== <u>Hardware Node configuration</u> ===
{{Warning|if you are '''configuring''' the node '''remotely''' you '''must''' prepare a '''script''' with the below commands and run it in the background with redirected output, or you'll '''lose access''' to the Node.}}
==== Create a bridge device ====
<pre> [HN]# brctl addbr br0</pre>
==== Remove the IP from the eth0 interface ====
<pre> [HN]# ifconfig eth0 0</pre>
==== Add eth0 interface into the bridge ====
<pre> [HN]# brctl addif br0 eth0</pre>
==== Assign the IP to the bridge ====
(the same one that was assigned to eth0 earlier)
<pre> [HN]# ifconfig br0 10.0.0.2/24</pre>
==== Resurrect the default routing ====
<pre> [HN]# ip route add default via 10.0.0.1 dev br0</pre>
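Optionally, you can verify the new bridge and routing on the node at this point (a quick check; the exact output depends on your distribution):
<pre> [HN]# brctl show
 [HN]# ip addr show br0
 [HN]# ip route</pre>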
==== A script example ====
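A sketch of such a script (here assumed to be saved as <code>/tmp/br_add</code>), simply bundling the commands above; adjust the interface, addresses and gateway to your setup:
<pre>
#!/bin/bash
# /tmp/br_add - set up the bridge and restore routing in one go
/usr/sbin/brctl addbr br0
/sbin/ifconfig eth0 0
/usr/sbin/brctl addif br0 eth0
/sbin/ifconfig br0 10.0.0.2/24
/sbin/ip route add default via 10.0.0.1 dev br0
</pre>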
<pre> [HN]# /tmp/br_add >/dev/null 2>&1 &</pre>

=== <u>Container configuration</u> ===

==== Start the container ====
<pre> [HN]# vzctl start 101</pre>

==== Add a [[Virtual_Ethernet_device|veth interface]] to the container ====
<pre> [HN]# vzctl set 101 --netif_add eth0 --save</pre>

==== Set up an IP on the newly created container's veth interface ====
<pre> [HN]# vzctl exec 101 ifconfig eth0 85.86.87.195/26</pre>
==== Add the container's veth interface to the bridge ====
<pre> [HN]# brctl addif br0 veth101.0</pre>
{{Note|There will be a delay of about 15 seconds (the default for the 2.6.18 kernel) while the bridge software runs STP to detect loops and transitions the veth interface to the forwarding state.<!-- /sys/class/net/$BR_NAME/bridge/forward_delay in SEC*USER_HZ -->}}
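If this delay is inconvenient, you can lower the bridge's forward delay (a sketch; skipping the STP listening/learning period is usually acceptable when no bridge loops are possible in your topology):
<pre> [HN]# brctl setfd br0 0</pre>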
==== Set up the default route for the container ====
<pre> [HN]# vzctl exec 101 ip route add default via 85.86.87.193 dev eth0</pre>
==== (Optional) Add CT ↔ HN routes ====
The above configuration provides the following connections:
* CT X ↔ CT Y (where CT X and CT Y can be located on any OpenVZ HN)
* CT ↔ Internet
Note that:
* The accessibility of the CT from the HN depends on whether the local gateway provides NAT (probably yes).
* The accessibility of the HN from the CT depends on whether the ISP gateway is aware of the local network addresses (most probably not).
So to provide CT ↔ HN accessibility regardless of the gateways' configuration you can add the following routes:
<pre> [HN]# ip route add 85.86.87.195 dev br0
 [HN]# vzctl exec 101 ip route add 10.0.0.2 dev eth0</pre>
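To check that the routes work, a quick test using the addresses from this example:
<pre> [HN]# ping -c 1 85.86.87.195
 [HN]# vzctl exec 101 ping -c 1 10.0.0.2</pre>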
=== <u>Resulting OpenVZ Node configuration</u> ===
[[Image:PrivateIPs_fig2.gif|The resulting OpenVZ Node configuration]]
=== <u>Making the configuration persistent</u> ===
==== Set up a bridge on the HN ====
This can be done by configuring the <code>ifcfg-*</code> files located in <code>/etc/sysconfig/network-scripts/</code>.
Assuming you had a configuration file (e.g. <code>ifcfg-eth0</code>) like:
<pre>
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.2
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
</pre>
To have the bridge <code>br0</code> created automatically, you can create <code>ifcfg-br0</code>:
<pre>
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
IPADDR=10.0.0.2
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
</pre>
and edit the <code>ifcfg-eth0</code> file to add the <code>eth0</code> interface into the bridge <code>br0</code>:
<pre>
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
</pre>
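After editing these files, the configuration can be applied by restarting the network service (on RHEL-like systems; as with the manual steps above, do this from the console or a backgrounded script if you are connected remotely):
<pre> [HN]# service network restart</pre>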
==== Edit the container's configuration ====
Add these parameters to the <code>/etc/vz/conf/$VEID.conf</code> file; they will be used during the network configuration:
* Add/change <code>CONFIG_CUSTOMIZED="yes"</code> (indicates that a custom script should be run on container start)
* Add <code>VETH_IP_ADDRESS="<IP>/<MASK>"</code> (a container can have multiple IPs separated by spaces)
* Add <code>VE_DEFAULT_GATEWAY="<CT DEFAULT GATEWAY>"</code>
* Add <code>BRIDGEDEV="<BRIDGE NAME>"</code> (the name of the bridge to which the container's veth interface should be added)
An example:
<pre>
# Network customization section
CONFIG_CUSTOMIZED="yes"
VETH_IP_ADDRESS="85.86.87.195/26"
VE_DEFAULT_GATEWAY="85.86.87.193"
BRIDGEDEV="br0"
</pre>
==== Create a custom network configuration script ====
which should be called each time a container is started (e.g. <code>/usr/sbin/vznetcfg.custom</code>):
<pre>
#!/bin/bash
# /usr/sbin/vznetcfg.custom
# a script to bring up bridged network interfaces (veth's) in a container

GLOBALCONFIGFILE=/etc/vz/vz.conf
CTCONFIGFILE=/etc/vz/conf/$VEID.conf
vzctl=/usr/sbin/vzctl
brctl=/usr/sbin/brctl
ip=/sbin/ip
ifconfig=/sbin/ifconfig

. $GLOBALCONFIGFILE
. $CTCONFIGFILE

NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`
for str in $NETIF_OPTIONS; do
    # getting 'ifname' parameter value
    if [[ "$str" =~ ^ifname= ]]; then
        # remove the parameter name from the string (along with '=')
        CTIFNAME=${str#*=};
    fi
    # getting 'host_ifname' parameter value
    if [[ "$str" =~ ^host_ifname= ]]; then
        # remove the parameter name from the string (along with '=')
        VZHOSTIF=${str#*=};
    fi
done

if [ ! -n "$VETH_IP_ADDRESS" ]; then
    echo "According to $CTCONFIGFILE CT$VEID has no veth IPs configured."
    exit 1
fi

if [ ! -n "$VZHOSTIF" ]; then
    echo "According to $CTCONFIGFILE CT$VEID has no veth interface configured."
    exit 1
fi

if [ ! -n "$CTIFNAME" ]; then
    echo "Corrupted $CTCONFIGFILE: no 'ifname' defined for host_ifname $VZHOSTIF."
    exit 1
fi

echo "Initializing interface $VZHOSTIF for CT$VEID."
$ifconfig $VZHOSTIF 0

CTROUTEDEV=$VZHOSTIF

if [ -n "$BRIDGEDEV" ]; then
    echo "Adding interface $VZHOSTIF to the bridge $BRIDGEDEV."
    CTROUTEDEV=$BRIDGEDEV
    $brctl addif $BRIDGEDEV $VZHOSTIF
fi

# bring the $CTIFNAME link up inside CT$VEID
$vzctl exec $VEID $ip link set $CTIFNAME up

for IP in $VETH_IP_ADDRESS; do
    echo "Adding the IP $IP to $CTIFNAME for CT$VEID."
    $vzctl exec $VEID $ip address add $IP dev $CTIFNAME

    # removing the netmask
    IP_STRIP=${IP%%/*};

    echo "Adding a route from CT0 to CT$VEID using $IP_STRIP."
    $ip route add $IP_STRIP dev $CTROUTEDEV
done

if [ -n "$CT0_IP" ]; then
    echo "Adding a route from CT$VEID to CT0."
    $vzctl exec $VEID $ip route add $CT0_IP dev $CTIFNAME
fi

if [ -n "$VE_DEFAULT_GATEWAY" ]; then
    echo "Setting $VE_DEFAULT_GATEWAY as the default gateway for CT$VEID."
    $vzctl exec $VEID \
        $ip route add default via $VE_DEFAULT_GATEWAY dev $CTIFNAME
fi

exit 0
</pre>
<p><small>Note: this script can be easily extended to work for multiple triples &lt;bridge, ip address, veth device&gt;, see http://sysadmin-ivanov.blogspot.com/2008/02/2-veth-with-2-bridges-on-openvz-at.html </small></p>
 
==== Make the script run on container start ====
In order to run the above script on container start, create the file
<code>/etc/vz/vznet.conf</code> with the following contents:
 
<pre>
EXTERNAL_SCRIPT="/usr/sbin/vznetcfg.custom"
</pre>
 
{{Note|<code>/usr/sbin/vznetcfg.custom</code> should be executable (chmod +x /usr/sbin/vznetcfg.custom)}}
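You can now check the whole chain by restarting the container and looking at the bridge and the added routes (a quick sanity check using this example's addresses):
<pre> [HN]# vzctl restart 101
 [HN]# brctl show
 [HN]# ip route | grep 85.86.87.195</pre>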
 
{{Note|When the CT is stopped, the HN → CT route(s) are still present in the routing table. We can use an on-umount script to solve this.}}
 
==== Create an on-umount script to remove the HN → CT route(s) ====
which will be called each time the container with the given VEID (<code>/etc/vz/conf/$VEID.umount</code>) or any container (<code>/etc/vz/conf/vps.umount</code>) is stopped.
<pre>
#!/bin/bash
# /etc/vz/conf/$VEID.umount or /etc/vz/conf/vps.umount
# a script to remove the routes to a container with a bridged veth from the bridge

CTCONFIGFILE=/etc/vz/conf/$VEID.conf
ip=/sbin/ip

. $CTCONFIGFILE

if [ ! -n "$VETH_IP_ADDRESS" ]; then
    exit 0
fi

if [ ! -n "$BRIDGEDEV" ]; then
    exit 0
fi

for IP in $VETH_IP_ADDRESS; do
    # removing the netmask
    IP_STRIP=${IP%%/*};

    echo "Removing the route from CT0 to CT$VEID using $IP_STRIP."
    $ip route del $IP_STRIP dev $BRIDGEDEV
done

exit 0
</pre>
{{Note|The script should be executable (chmod +x /etc/vz/conf/vps.umount)}}

==== Setting the CT → HN route ====
To set up a route from the CT to the HN, the custom script has to get the HN IP (the $CT0_IP variable in the script). There are several ways to specify it:
# Add an entry CT0_IP="<CT0 IP>" to the <code>$VEID.conf</code> file
# Add an entry CT0_IP="<CT0 IP>" to the <code>/etc/vz/vz.conf</code> file (the global config file, see the example below)
# Implement some smart algorithm to determine the CT0 IP right in the custom network configuration script
Each variant has its pros and cons; nevertheless, for a static HN IP configuration, variant 2 seems acceptable (and the simplest).
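For variant 2 from the list above, the entry is a single line added to the global config (a sketch using this article's HN address):
<pre>
# /etc/vz/vz.conf (fragment)
CT0_IP="10.0.0.2"
</pre>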
== (2) The OpenVZ Hardware Node has two Ethernet interfaces ==
Assume you have 2 interfaces, eth0 and eth1, and want to separate local traffic (10.0.0.0/24) from the external traffic.
Let's assign eth0 for the external traffic and eth1 for the local one.
If there is no need to make the container accessible from the HN and vice versa, it's enough to replace 'br0' with 'eth1' in the following steps of the above configuration:
* Hardware Node configuration -> [[Using_private_IPs_for_Hardware_Nodes#Assign_the_IP_to_the_bridge|Assign the IP to the bridge]]
* Hardware Node configuration -> [[Using_private_IPs_for_Hardware_Nodes#Resurrect_the_default_routing|Resurrect the default routing]]
To ensure CT ↔ HN connection availability, it is necessary to set a local IP on 'br0'.
== (3) Putting containers into different subnetworks ==
It's enough to set up the correct $VETH_IP_ADDRESS and $VE_DEFAULT_GATEWAY values in the
[[Using_private_IPs_for_Hardware_Nodes#Edit_the_container.27s_configuration|above configuration]].
== See also ==