{{Legacy}}

This article describes how to assign public IPs to containers running on OVZ Hardware Nodes in case you have the following network topology:

[[Image:PrivateIPs_fig1.gif|An initial network topology]]

== Using a spare IP in the same range ==
If you have a spare IP to use, you can assign it to a subinterface and use it as the nameserver:

<pre>
[HN]# ifconfig eth0:1 *.*.*.*
[HN]# vzctl set 101 --nameserver *.*.*.*
</pre>
== Prerequisites ==
This configuration was tested on a RHEL5 OpenVZ Hardware Node and a container based on a Fedora Core 5 template.
Other host OSes and templates might require some configuration changes; please add the corresponding OS-specific changes if you have faced any.

This article assumes the presence of the 'brctl', 'ip' and 'ifconfig' utilities. You may need to install missing packages such as 'bridge-utils', 'iproute' or 'net-tools', which contain those utilities.

This article assumes you have already [[Quick installation|installed OpenVZ]],
prepared the [[OS template cache]](s) and have
[[Basic_operations_in_OpenVZ_environment|container(s) created]]. If not, follow the links to perform the steps needed.
{{Note|don't assign an IP after container creation.}}

== An OVZ Hardware Node has only one Ethernet interface ==
(assume eth0)

=== Hardware Node configuration ===

{{Warning|if you are '''configuring''' the node '''remotely''' you '''must''' prepare a '''script''' with the below commands and run it in the background with its output redirected, or you'll '''lose access''' to the Node.}}

==== Create a bridge device ====
 [HN]# brctl addbr br0

==== Remove an IP from eth0 interface ====
 [HN]# ifconfig eth0 0

==== Add eth0 interface into the bridge ====
 [HN]# brctl addif br0 eth0

==== Assign the IP to the bridge ====
(the same that was assigned to eth0 earlier)
 [HN]# ifconfig br0 10.0.0.2/24

==== Resurrect the default routing ====
 [HN]# ip route add default via 10.0.0.1 dev br0

==== A script example ====
<pre>
#!/bin/bash
# /tmp/br_add: perform the above steps in one shot
brctl addbr br0
ifconfig eth0 0
brctl addif br0 eth0
ifconfig br0 10.0.0.2/24
ip route add default via 10.0.0.1 dev br0
</pre>

 [HN]# /tmp/br_add >/dev/null 2>&1 &
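To check that the reconfiguration worked, you can compare the state of the bridge and the routing table against the topology above (a quick sanity check; the exact output format varies with the bridge-utils and iproute versions):

<pre>
[HN]# brctl show br0
[HN]# ifconfig br0
[HN]# ip route | grep default
</pre>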
=== Container configuration ===

==== Start a container ====
 [HN]# vzctl start 101

==== Add a [[Virtual_Ethernet_device|veth interface]] to the container ====
 [HN]# vzctl set 101 --netif_add eth0 --save

==== Set up an IP to the newly created container's veth interface ====
 [HN]# vzctl exec 101 ifconfig eth0 85.86.87.195/26

==== Add the container's veth interface to the bridge ====
 [HN]# brctl addif br0 veth101.0

{{Note|There will be a delay of about 15 seconds (the default for the 2.6.18 kernel) while the bridge software runs STP to detect loops and transitions the veth interface to the forwarding state.<!-- /sys/class/net/$BR_NAME/bridge/forward_delay in SEC*USER_HZ -->}}

==== Set up the default route for the container ====
 [HN]# vzctl exec 101 ip route add default via 85.86.87.193 dev eth0
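At this point the container should have external connectivity; a quick way to verify it (assuming ICMP is not filtered by your gateways):

<pre>
[HN]# vzctl exec 101 ping -c 3 85.86.87.193
[HN]# vzctl exec 101 ping -c 3 openvz.org
</pre>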
==== (Optional) Add CT↔HN routes ====
The above configuration provides the following connections:
* CT X ↔ CT Y (where CT X and CT Y can be located on any OVZ HN)
* CT ↔ Internet

Note that:
* the accessibility of the CT from the HN depends on the local gateway providing NAT (probably yes);
* the accessibility of the HN from the CT depends on the ISP gateway being aware of the local network (probably not).

So to provide CT ↔ HN accessibility regardless of the gateways' configuration, you can add the following routes:

 [HN]# ip route add 85.86.87.195 dev br0
 [HN]# vzctl exec 101 ip route add 10.0.0.2 dev eth0

=== Resulting OpenVZ Node configuration ===
[[Image:PrivateIPs_fig2.gif|Resulting OpenVZ Node configuration]]
=== Making the configuration persistent ===

==== Set up a bridge on a HN ====
This can be done by configuring the <code>ifcfg-*</code> files located in <code>/etc/sysconfig/network-scripts/</code>.

Assuming you had a configuration file (e.g. <code>ifcfg-eth0</code>) like:
<pre>
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.2
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
</pre>

To automatically create the bridge <code>br0</code>, create <code>ifcfg-br0</code>:
<pre>
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
IPADDR=10.0.0.2
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
</pre>

and edit <code>ifcfg-eth0</code> to add the <code>eth0</code> interface into the bridge <code>br0</code>:
<pre>
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
</pre>
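If the STP forwarding delay on the bridge is a problem, the RHEL initscripts also accept a <code>DELAY</code> parameter in the bridge's ifcfg file; setting it to 0 makes the bridge start forwarding immediately. Only do this if your topology cannot contain loops:

<pre>
# appended to ifcfg-br0
DELAY=0
</pre>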
==== Edit the container's configuration ====
Add these parameters to the <code>/etc/vz/conf/$CTID.conf</code> file; they will be used during the network configuration:
* <code>VETH_IP_ADDRESS="IP/MASK"</code> (a container can have multiple IPs, separated by spaces)
* <code>VE_DEFAULT_GATEWAY="CT DEFAULT GATEWAY"</code>
* <code>BRIDGEDEV="BRIDGE NAME"</code> (the name of the bridge to which the container's veth interface should be added)

An example:
<pre>
# Network customization section
VETH_IP_ADDRESS="85.86.87.195/26"
VE_DEFAULT_GATEWAY="85.86.87.193"
BRIDGEDEV="br0"
</pre>
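To illustrate how a script consumes these parameters, here is a small stand-alone sketch that walks a space-separated $VETH_IP_ADDRESS list and strips the /MASK suffix with the same parameter expansion the custom network configuration script uses (the second address is a made-up example of a container with multiple IPs):

```shell
#!/bin/bash
# Stand-alone sketch: iterate over a space-separated IP/MASK list.
# The second address is hypothetical, added to show the multi-IP case.
VETH_IP_ADDRESS="85.86.87.195/26 85.86.87.196/26"

for IP in $VETH_IP_ADDRESS; do
    # removing the netmask: ${IP%%/*} drops everything from the first '/'
    IP_STRIP=${IP%%/*}
    echo "$IP_STRIP"
done
# prints:
# 85.86.87.195
# 85.86.87.196
```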
==== Create a custom network configuration script ====
which should be called each time a container is started (e.g. <code>/usr/sbin/vznetcfg.custom</code>):
<pre>
#!/bin/bash
# /usr/sbin/vznetcfg.custom
# a script to bring up bridged network interfaces (veth's) in a container

GLOBALCONFIGFILE=/etc/vz/vz.conf
CTCONFIGFILE=/etc/vz/conf/$VEID.conf
vzctl=/usr/sbin/vzctl
brctl=/usr/sbin/brctl
ip=/sbin/ip
ifconfig=/sbin/ifconfig
. $GLOBALCONFIGFILE
. $CTCONFIGFILE

NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`
for str in $NETIF_OPTIONS; do
    # getting 'ifname' parameter value
    if echo "$str" | grep -q "^ifname=" ; then
        # remove the parameter name from the string (along with '=')
        CTIFNAME=${str#*=}
    fi
    # getting 'host_ifname' parameter value
    if echo "$str" | grep -q "^host_ifname=" ; then
        # remove the parameter name from the string (along with '=')
        VZHOSTIF=${str#*=}
    fi
done

if [ ! -n "$VETH_IP_ADDRESS" ]; then
    echo "According to $CTCONFIGFILE CT$VEID has no veth IPs configured."
    exit 1
fi

if [ ! -n "$VZHOSTIF" ]; then
    echo "According to $CTCONFIGFILE CT$VEID has no veth interface configured."
    exit 1
fi

if [ ! -n "$CTIFNAME" ]; then
    echo "Corrupted $CTCONFIGFILE: no 'ifname' defined for host_ifname $VZHOSTIF."
    exit 1
fi

echo "Initializing interface $VZHOSTIF for CT$VEID."
$ifconfig $VZHOSTIF 0

CTROUTEDEV=$VZHOSTIF

if [ -n "$BRIDGEDEV" ]; then
    echo "Adding interface $VZHOSTIF to the bridge $BRIDGEDEV."
    CTROUTEDEV=$BRIDGEDEV
    $brctl addif $BRIDGEDEV $VZHOSTIF
fi

# Up the interface $CTIFNAME link in CT$VEID
$vzctl exec $VEID $ip link set $CTIFNAME up

for IP in $VETH_IP_ADDRESS; do
    echo "Adding an IP $IP to the $CTIFNAME for CT$VEID."
    $vzctl exec $VEID $ip address add $IP dev $CTIFNAME

    # removing the netmask
    IP_STRIP=${IP%%/*}

    echo "Adding a route from CT0 to CT$VEID using $IP_STRIP."
    $ip route add $IP_STRIP dev $CTROUTEDEV
done

if [ -n "$CT0_IP" ]; then
    echo "Adding a route from CT$VEID to CT0."
    $vzctl exec $VEID $ip route add $CT0_IP dev $CTIFNAME
fi

if [ -n "$VE_DEFAULT_GATEWAY" ]; then
    echo "Setting $VE_DEFAULT_GATEWAY as a default gateway for CT$VEID."
    $vzctl exec $VEID \
        $ip route add default via $VE_DEFAULT_GATEWAY dev $CTIFNAME
fi

exit 0
</pre>
<p><small>Note: this script can be easily extended to work for multiple &lt;bridge, ip address, veth device&gt; triples, see http://sysadmin-ivanov.blogspot.com/2008/02/2-veth-with-2-bridges-on-openvz-at.html</small></p>
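The NETIF parsing loop at the top of the script can be tried out in isolation. Here is a minimal sketch with a hard-coded sample value in the <code>key=value,key=value</code> format that vzctl writes to the container config (the MAC addresses below are made up for the example):

```shell
#!/bin/bash
# A sample NETIF value; real ones are written by 'vzctl set --netif_add'.
# The MAC addresses are made up for this example.
NETIF="ifname=eth0,mac=00:18:51:AA:BB:CC,host_ifname=veth101.0,host_mac=00:18:51:DD:EE:FF"

# split on commas and pick out the two interface names,
# mirroring the loop in the custom script
for str in $(echo $NETIF | sed 's/,/\n/g'); do
    if echo "$str" | grep -q "^ifname=" ; then
        CTIFNAME=${str#*=}      # interface name inside the container
    fi
    if echo "$str" | grep -q "^host_ifname=" ; then
        VZHOSTIF=${str#*=}      # veth end visible on the Hardware Node
    fi
done

echo "$CTIFNAME $VZHOSTIF"      # prints: eth0 veth101.0
```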
==== Make the script run on container start ====
In order to run the above script each time a container starts, create the file
<code>/etc/vz/vznet.conf</code> with the following contents:

 EXTERNAL_SCRIPT="/usr/sbin/vznetcfg.custom"

{{Note|<code>/usr/sbin/vznetcfg.custom</code> should be executable (chmod +x /usr/sbin/vznetcfg.custom).}}

{{Note|When a CT is stopped, the HN → CT route(s) are still present in the routing table. An on-umount script can be used to solve this.}}
==== Create an on-umount script to remove the HN → CT route(s) ====
which will be called each time a particular container (<code>/etc/vz/conf/$VEID.umount</code>), or any container (<code>/etc/vz/conf/vps.umount</code>), is stopped:

<pre>
#!/bin/bash
# /etc/vz/conf/$VEID.umount or /etc/vz/conf/vps.umount
# a script to remove the routes to a container with a bridged veth interface

CTCONFIGFILE=/etc/vz/conf/$VEID.conf
ip=/sbin/ip
. $CTCONFIGFILE

if [ ! -n "$VETH_IP_ADDRESS" ]; then
    exit 0
fi

if [ ! -n "$BRIDGEDEV" ]; then
    exit 0
fi

for IP in $VETH_IP_ADDRESS; do
    # removing the netmask
    IP_STRIP=${IP%%/*}

    echo "Removing the route from CT0 to CT$VEID using $IP_STRIP."
    $ip route del $IP_STRIP dev $BRIDGEDEV
done

exit 0
</pre>

{{Note|The script should be executable (chmod +x /etc/vz/conf/vps.umount).}}
==== Setting the route CT → HN ====
To set up a route from the CT to the HN, the custom script has to get a HN IP (the $CT0_IP variable in the script). There are several ways to specify it:
# Add an entry <code>CT0_IP="CT0 IP"</code> to the <code>$VEID.conf</code> file
# Add an entry <code>CT0_IP="CT0 IP"</code> to <code>/etc/vz/vz.conf</code> (the global configuration file)
# Implement some smart algorithm to determine the CT0 IP right in the custom network configuration script

Each variant has its pros and cons; nevertheless, for a HN with a static IP, variant 2 seems to be acceptable (and the most simple).
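For variant 2 the entry is a single shell-style assignment; with the Hardware Node address used throughout this article it would look like:

<pre>
# in /etc/vz/vz.conf
CT0_IP="10.0.0.2"
</pre>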
== An OpenVZ Hardware Node has two Ethernet interfaces ==
Assume you have two interfaces, eth0 and eth1, and want to separate local traffic (10.0.0.0/24) from external traffic.
Let's assign eth0 for the external traffic and eth1 for the local one.

If there is no need to make the container accessible from the HN and vice versa, it's enough to replace 'br0' with 'eth1' in the following steps of the above configuration:
* Hardware Node configuration → [[Using_private_IPs_for_Hardware_Nodes#Assign_the_IP_to_the_bridge|Assign the IP to the bridge]]
* Hardware Node configuration → [[Using_private_IPs_for_Hardware_Nodes#Resurrect_the_default_routing|Resurrect the default routing]]

It is necessary to set a local IP for 'br0' to ensure CT ↔ HN connection availability.
== Putting containers to different subnetworks ==
It's enough to set up the correct $VETH_IP_ADDRESS and $VE_DEFAULT_GATEWAY values in the
[[Using_private_IPs_for_Hardware_Nodes#Edit_the_container.27s_configuration|above configuration]].
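For example, for a container that should live in another subnetwork, only the two values change (the addresses below are hypothetical; substitute your own allocation):

<pre>
# Network customization section
VETH_IP_ADDRESS="192.0.2.10/24"
VE_DEFAULT_GATEWAY="192.0.2.1"
BRIDGEDEV="br0"
</pre>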
== See also ==
* [[Virtual network device]]
* [[Differences between venet and veth]]
[[Category: HOWTO]]
[[Category: Networking]]