Using private IPs for Hardware Nodes
Warning: This article applies to legacy OpenVZ (pre-VZ7) only. For Virtuozzo 7 documentation, see https://docs.openvz.org.
This article describes how to assign public IPs to containers running on OVZ Hardware Nodes in case you have the following network topology:

[Image: PrivateIPs_fig1.gif, an initial network topology]
Using a spare IP in the same range
If you have a spare IP to use, you can assign it to a subinterface and use it as the container's nameserver:
[HN]# ifconfig eth0:1 *.*.*.*
[HN]# vzctl set 101 --nameserver *.*.*.*
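For instance, with a hypothetical spare address 10.0.0.3 from the node's own range (substitute your real spare IP for the *.*.*.* placeholders above):

# hypothetical spare IP, used for illustration only
[HN]# ifconfig eth0:1 10.0.0.3
[HN]# vzctl set 101 --nameserver 10.0.0.3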
Prerequisites
This configuration was tested on a RHEL5 OpenVZ Hardware Node with a container based on a Fedora Core 5 template. Other host OSes and templates might require some configuration changes; please add the corresponding OS-specific changes if you have encountered any.
This article assumes the presence of the brctl, ip, and ifconfig utilities. You may need to install missing packages such as bridge-utils, iproute, or net-tools, which contain these utilities.
This article assumes you have already installed OpenVZ, prepared the OS template cache(s), and created container(s). If not, follow the links to perform the needed steps.
Note: don't assign an IP after container creation.
An OVZ Hardware Node has only one Ethernet interface
(assume eth0)
Hardware Node configuration
Warning: if you are configuring the node remotely, you must prepare a script with the commands below and run it in the background with its output redirected, or you will lose access to the Node.
Create a bridge device
[HN]# brctl addbr br0
Remove the IP from the eth0 interface
[HN]# ifconfig eth0 0
Add the eth0 interface to the bridge
[HN]# brctl addif br0 eth0
Assign the IP to the bridge
(the same IP that was assigned to eth0 earlier)
[HN]# ifconfig br0 10.0.0.2/24
Resurrect the default routing
[HN]# ip route add default via 10.0.0.1 dev br0
A script example
[HN]# cat /tmp/br_add
#!/bin/bash
brctl addbr br0
ifconfig eth0 0
brctl addif br0 eth0
ifconfig br0 10.0.0.2/24
ip route add default via 10.0.0.1 dev br0
[HN]# /tmp/br_add >/dev/null 2>&1 &
Container configuration
Start a container
[HN]# vzctl start 101
Add a veth interface to the container
[HN]# vzctl set 101 --netif_add eth0 --save
Set up an IP on the newly created container's veth interface
[HN]# vzctl exec 101 ifconfig eth0 85.86.87.195/26
Add the container's veth interface to the bridge
[HN]# brctl addif br0 veth101.0
Note: There will be a delay of about 15 seconds (the default for the 2.6.18 kernel) while the bridge software runs STP to detect loops and transitions the veth interface to the forwarding state.
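If this delay is undesirable and you are sure your topology cannot form bridging loops, a possible workaround is to disable STP on the bridge or set its forward delay to zero, for example:

# optional: skip the STP listening/learning delay on br0
[HN]# brctl stp br0 off
[HN]# brctl setfd br0 0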
Set up the default route for the container
[HN]# vzctl exec 101 ip route add default via 85.86.87.193 dev eth0
(Optional) Add CT↔HN routes
The above configuration provides the following connections:
- CT X ↔ CT Y (where CT X and CT Y can be located on any OVZ HN)
- CT ↔ Internet
Note that
- The accessibility of the CT from the HN depends on whether the local gateway provides NAT (probably yes)
- The accessibility of the HN from the CT depends on whether the ISP gateway is aware of the local network (probably not)
So, to provide CT ↔ HN accessibility regardless of the gateways' configuration, you can add the following routes:
[HN]# ip route add 85.86.87.195 dev br0
[HN]# vzctl exec 101 ip route add 10.0.0.2 dev eth0
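You can then check connectivity in both directions, using the example addresses from above:

# from the HN to the container's public IP
[HN]# ping -c 2 85.86.87.195
# from the container back to the HN's local IP
[HN]# vzctl exec 101 ping -c 2 10.0.0.2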
Resulting OpenVZ Node configuration
Making the configuration persistent
Set up a bridge on a HN
This can be done by configuring the ifcfg-* files located in /etc/sysconfig/network-scripts/.

Assuming you had a configuration file (e.g. ifcfg-eth0) like:
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.2
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
To automatically create the bridge br0, you can create ifcfg-br0:
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
IPADDR=10.0.0.2
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
and edit ifcfg-eth0 to add the eth0 interface into the bridge br0:
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
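After editing both files, restart the network service (on RHEL5-style systems) so the bridge comes up with the new configuration; as with the manual setup, do this from the console or in the background if you are connected remotely:

[HN]# service network restart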
Edit the container's configuration
Add these parameters to the /etc/vz/conf/$CTID.conf file, which will be used during the network configuration:

- Add VETH_IP_ADDRESS="IP/MASK" (a container can have multiple IPs separated by spaces)
- Add VE_DEFAULT_GATEWAY="CT DEFAULT GATEWAY"
- Add BRIDGEDEV="BRIDGE NAME" (the name of the bridge to which the container's veth interface should be added)
An example:
# Network customization section
VETH_IP_ADDRESS="85.86.87.195/26"
VE_DEFAULT_GATEWAY="85.86.87.193"
BRIDGEDEV="br0"
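Since VETH_IP_ADDRESS accepts several space-separated addresses, a two-IP variant might look like this (the second address is purely illustrative):

# Network customization section (hypothetical two-IP example)
VETH_IP_ADDRESS="85.86.87.195/26 85.86.87.196/26"
VE_DEFAULT_GATEWAY="85.86.87.193"
BRIDGEDEV="br0"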
Create a custom network configuration script
which should be called each time a container is started (e.g. /usr/sbin/vznetcfg.custom):
#!/bin/bash
# /usr/sbin/vznetcfg.custom
# a script to bring up bridged network interfaces (veth's) in a container

GLOBALCONFIGFILE=/etc/vz/vz.conf
CTCONFIGFILE=/etc/vz/conf/$VEID.conf
vzctl=/usr/sbin/vzctl
brctl=/usr/sbin/brctl
ip=/sbin/ip
ifconfig=/sbin/ifconfig

. $GLOBALCONFIGFILE
. $CTCONFIGFILE

# split the comma-separated NETIF options into one option per line
NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`
for str in $NETIF_OPTIONS; do
    # getting 'ifname' parameter value
    if echo "$str" | grep -q "^ifname=" ; then
        # remove the parameter name from the string (along with '=')
        CTIFNAME=${str#*=};
    fi
    # getting 'host_ifname' parameter value
    if echo "$str" | grep -q "^host_ifname=" ; then
        # remove the parameter name from the string (along with '=')
        VZHOSTIF=${str#*=};
    fi
done

if [ ! -n "$VETH_IP_ADDRESS" ]; then
    echo "According to $CTCONFIGFILE CT$VEID has no veth IPs configured."
    exit 1
fi

if [ ! -n "$VZHOSTIF" ]; then
    echo "According to $CTCONFIGFILE CT$VEID has no veth interface configured."
    exit 1
fi

if [ ! -n "$CTIFNAME" ]; then
    echo "Corrupted $CTCONFIGFILE: no 'ifname' defined for host_ifname $VZHOSTIF."
    exit 1
fi

echo "Initializing interface $VZHOSTIF for CT$VEID."
$ifconfig $VZHOSTIF 0

CTROUTEDEV=$VZHOSTIF

if [ -n "$BRIDGEDEV" ]; then
    echo "Adding interface $VZHOSTIF to the bridge $BRIDGEDEV."
    CTROUTEDEV=$BRIDGEDEV
    $brctl addif $BRIDGEDEV $VZHOSTIF
fi

# Up the interface $CTIFNAME link in CT$VEID
$vzctl exec $VEID $ip link set $CTIFNAME up

for IP in $VETH_IP_ADDRESS; do
    echo "Adding an IP $IP to the $CTIFNAME for CT$VEID."
    $vzctl exec $VEID $ip address add $IP dev $CTIFNAME

    # removing the netmask
    IP_STRIP=${IP%%/*};

    echo "Adding a route from CT0 to CT$VEID using $IP_STRIP."
    $ip route add $IP_STRIP dev $CTROUTEDEV
done

if [ -n "$CT0_IP" ]; then
    echo "Adding a route from CT$VEID to CT0."
    $vzctl exec $VEID $ip route add $CT0_IP dev $CTIFNAME
fi

if [ -n "$VE_DEFAULT_GATEWAY" ]; then
    echo "Setting $VE_DEFAULT_GATEWAY as a default gateway for CT$VEID."
    $vzctl exec $VEID \
        $ip route add default via $VE_DEFAULT_GATEWAY dev $CTIFNAME
fi

exit 0
Note: this script can easily be extended to work with multiple <bridge, IP address, veth device> triples; see http://sysadmin-ivanov.blogspot.com/2008/02/2-veth-with-2-bridges-on-openvz-at.html
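To try the script by hand before hooking it into vzctl, you can provide the VEID variable yourself (vzctl normally sets it in the script's environment; container 101 is the example used throughout):

# manual run for the example container; VEID is normally set by vzctl
[HN]# VEID=101 /usr/sbin/vznetcfg.custom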
Make the script run on container start
In order to run the above script on container start, create the file /etc/vz/vznet.conf with the following contents:
EXTERNAL_SCRIPT="/usr/sbin/vznetcfg.custom"
Note: /usr/sbin/vznetcfg.custom should be executable (chmod +x /usr/sbin/vznetcfg.custom).
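After that, restarting the container should make vzctl invoke the script; a quick sanity check is to look for the CT0 → CT route it adds (example addresses from above):

[HN]# vzctl restart 101
# the route added by the script should now be present
[HN]# ip route | grep 85.86.87.195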
Note: when a CT is stopped, the HN → CT route(s) remain in the routing table. An on-umount script can be used to solve this.
Create an on-umount script to remove the HN → CT route(s)
which should be called each time the container with a given VEID (/etc/vz/conf/$VEID.umount) or any container (/etc/vz/conf/vps.umount) is stopped.
#!/bin/bash
# /etc/vz/conf/$VEID.umount or /etc/vz/conf/vps.umount
# a script to remove routes to container with veth-bridge from bridge

CTCONFIGFILE=/etc/vz/conf/$VEID.conf
ip=/sbin/ip

. $CTCONFIGFILE

if [ ! -n "$VETH_IP_ADDRESS" ]; then
    exit 0
fi

if [ ! -n "$BRIDGEDEV" ]; then
    exit 0
fi

for IP in $VETH_IP_ADDRESS; do
    # removing the netmask
    IP_STRIP=${IP%%/*};

    echo "Remove a route from CT0 to CT$VEID using $IP_STRIP."
    $ip route del $IP_STRIP dev $BRIDGEDEV
done

exit 0
Note: The script should be executable (chmod +x /etc/vz/conf/vps.umount).
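To verify the cleanup, stop the example container and confirm its route has disappeared:

[HN]# vzctl stop 101
# should print nothing once the umount script has removed the route
[HN]# ip route | grep 85.86.87.195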
Setting the route CT → HN
To set up a route from the CT to the HN, the custom script has to be given the HN IP (the $CT0_IP variable in the script). There are several ways to specify it:
- Add an entry CT0_IP="CT0 IP" to the $VEID.conf file
- Add an entry CT0_IP="CT0 IP" to /etc/vz/vz.conf (the global configuration file)
- Implement some smart algorithm to determine the CT0 IP right in the custom network configuration script
Each variant has its pros and cons; nevertheless, for a HN with a static IP configuration, variant 2 seems acceptable (and is the simplest).
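For variant 2, assuming the HN local IP from the examples above (10.0.0.2), the entry can simply be appended to the global configuration file:

# make the HN IP available to vznetcfg.custom as $CT0_IP (variant 2)
[HN]# echo 'CT0_IP="10.0.0.2"' >> /etc/vz/vz.conf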
An OpenVZ Hardware Node has two Ethernet interfaces
Assume you have two interfaces, eth0 and eth1, and want to separate local traffic (10.0.0.0/24) from external traffic. Let's assign eth0 to the external traffic and eth1 to the local one.
If there is no need to make the container accessible from the HN and vice versa, it's enough to replace 'br0' with 'eth1' in the following steps of the above configuration:
- Hardware Node configuration → Assign the IP to the bridge
- Hardware Node configuration → Resurrect the default routing
Otherwise, it is necessary to set a local IP on 'br0' to ensure CT ↔ HN connectivity.
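A minimal sketch of the two-interface variant, following the replacement described above (adjust the addresses to your network):

# bridge only the external interface; container veths join br0 as before
[HN]# brctl addbr br0
[HN]# ifconfig eth0 0
[HN]# brctl addif br0 eth0
# the local IP and default route now live on eth1 instead of br0
[HN]# ifconfig eth1 10.0.0.2/24
[HN]# ip route add default via 10.0.0.1 dev eth1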
Putting containers into different subnetworks
It's enough to set up the correct $VETH_IP_ADDRESS and $VE_DEFAULT_GATEWAY values in the above configuration.
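For example, two containers in different subnets might use entries like these (the second subnet is purely illustrative):

# /etc/vz/conf/101.conf: container in the 85.86.87.192/26 subnet
VETH_IP_ADDRESS="85.86.87.195/26"
VE_DEFAULT_GATEWAY="85.86.87.193"
BRIDGEDEV="br0"

# /etc/vz/conf/102.conf: container in another (hypothetical) subnet
VETH_IP_ADDRESS="192.0.2.10/24"
VE_DEFAULT_GATEWAY="192.0.2.1"
BRIDGEDEV="br0"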