Using private IPs for Hardware Nodes

 [HN]# /tmp/br_add >/dev/null 2>&1 &

=== Container configuration ===

==== Start a container ====
 [HN]# vzctl start 101

==== Add a [[Virtual_Ethernet_device|veth interface]] to the container ====
 [HN]# vzctl set 101 --netif_add eth0 --save

==== Set up an IP address on the newly created container's veth interface ====
 [HN]# vzctl exec 101 ifconfig eth0 85.86.87.195/26

==== Add the container's veth interface to the bridge ====
 [HN]# brctl addif br0 veth101.0

{{Note|There will be a delay of about 15 seconds (the default for the 2.6.18 kernel) while the bridge software runs STP to detect loops and transitions the veth interface to the forwarding state.<!-- /sys/class/net/$BR_NAME/bridge/forward_delay in SEC*USER_HZ -->}}

==== Set up the default route for the container ====
 [HN]# vzctl exec 101 ip route add default via 85.86.87.193 dev eth0

==== (Optional) Add CT ↔ HN routes ====
The above configuration provides the following connections:
* CT X ↔ CT Y (where CT X and CT Y can be located on any OpenVZ HN)
* CT ↔ Internet

Note that:
* the accessibility of the CT from the HN depends on the local gateway providing NAT (probably yes);
* the accessibility of the HN from the CT depends on the ISP gateway being aware of the local network (probably not).

So, to provide CT ↔ HN accessibility regardless of the gateways' configuration, you can add the following routes:

 [HN]# ip route add 85.86.87.195 dev br0
 [HN]# vzctl exec 101 ip route add 10.0.0.2 dev eth0
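The container-configuration steps above can be collected into one helper script. This is only a sketch using the example values from this page (container ID 101, bridge <code>br0</code>, IP 85.86.87.195/26, gateway 85.86.87.193); the variable names are illustrative, and it assumes <code>vzctl</code> and <code>brctl</code> are installed on the HN:

```shell
#!/bin/sh
# Sketch: bring up a container and attach its veth interface to the bridge.
# Adjust CTID, IP, GW, and BRIDGE to match your setup.
CTID=101
IP=85.86.87.195/26
GW=85.86.87.193
BRIDGE=br0

vzctl start "$CTID"
vzctl set "$CTID" --netif_add eth0 --save
vzctl exec "$CTID" ifconfig eth0 "$IP"
brctl addif "$BRIDGE" "veth$CTID.0"

# Wait out the STP forwarding delay (~15 s by default) before
# relying on the bridged link, then set the default route.
sleep 16
vzctl exec "$CTID" ip route add default via "$GW" dev eth0
```

Run it as root on the HN after the bridge itself has been created (e.g. by the <code>/tmp/br_add</code> script above).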
=== Resulting OpenVZ Node configuration ===