VEs and HNs in same subnets

This page describes a method of setting up networking for a host and its VEs so that each VE can be configured exactly as if it were a standalone host in the same subnet or VLAN as the hardware node. The method uses the Virtual Ethernet device and bridges between the host and its containers. It has the advantage that IPv6 networking works on both VEs and hosts as it normally would; in particular, both hosts and VEs can use IPv6 autoconfiguration. The network configuration of a VE can be identical to that of a non-VE system.

In the following example the host has two physical interfaces and we are setting up the network configuration for VE 100. The host IP configuration is moved out of the ethN interface configs and into the vzbrN interface config scripts (ifcfg-vzbr0 and ifcfg-vzbr1); that is, the host IP configuration will now reside on the vzbrN bridge interfaces instead of the ethN interfaces.

1. (Optional) Verify that you can create a bridge interface for each physical interface on the host.

       /usr/sbin/brctl addbr vzbr0
       /usr/sbin/brctl addbr vzbr1

If the above commands do not work, you may need to install the bridge-utils package.
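
For example, on an RPM-based host you can check whether the package is installed and install it if necessary (assuming yum is the package manager in use):

       rpm -q bridge-utils || yum install bridge-utils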

2. Make note of the existing IP configuration in the host's ifcfg-ethN files. Then, modify the ifcfg-ethN files on the host so that they ONLY bridge to the corresponding vzbrN interface. /etc/sysconfig/network-scripts/ifcfg-eth0 should look like:

       DEVICE=eth0
       BOOTPROTO=none
       ONBOOT=yes
       BRIDGE=vzbr0

Similarly, ifcfg-eth1 will look like:

       DEVICE=eth1
       BOOTPROTO=none
       ONBOOT=yes
       BRIDGE=vzbr1

Note that the ifcfg-ethN files on the host no longer contain any IP information.

3. Create ifcfg-vzbrN files and copy the IP configuration that was previously in the ifcfg-ethN files into ifcfg-vzbrN. Here's what host:/etc/sysconfig/network-scripts/ifcfg-vzbr0 would look like assuming the IPv4 address is assigned statically and IPv6 auto-configuration (SLAAC) is used:

       DEVICE=vzbr0
       BOOTPROTO=static
       IPADDR=xxx.xxx.xxx.xxx
       NETMASK=aaa.aaa.aaa.aaa
       ONBOOT=yes
       TYPE=Bridge

Similarly, ifcfg-vzbr1 should look like:

       DEVICE=vzbr1
       BOOTPROTO=static
       IPADDR=yyy.yyy.yyy.yyy
       NETMASK=bbb.bbb.bbb.bbb
       ONBOOT=yes
       TYPE=Bridge
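
If the host should also use IPv6 autoconfiguration on the bridges, as assumed above, the usual RHEL/CentOS-style directives can be added to each ifcfg-vzbrN file (option names may vary slightly between distributions):

       IPV6INIT=yes
       IPV6_AUTOCONF=yes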

4. On the host, do a 'service network restart' and verify that the host has both IPv4 and IPv6 connectivity via its vzbrN interfaces.
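
One way to check this, substituting the address of your default gateway (or another host on each subnet) for the placeholders:

       /sbin/ip addr show vzbr0
       /sbin/ip addr show vzbr1
       ping -c 3 xxx.xxx.xxx.xxx
       ping6 -c 3 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx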

5. Create the VE as you normally would, except do NOT specify any IP address, just the hostname. Specifying an IP address during VE creation creates an unwanted venet interface, which is not used in this configuration.
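
For example, a new container might be created like this (the OS template name and hostname below are only illustrations; use whatever template you normally install from):

       /usr/sbin/vzctl create 100 --ostemplate centos-5-x86_64 --hostname ve100.example.com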

However, if the VE already exists, remove any venet devices - they will not be used:

       /usr/sbin/vzctl set 100 --ipdel all --save

6. For each VE, create the ethN devices on the host (ignore warnings about "Container does not have configured veth"):

       /usr/sbin/vzctl set 100 --netif_add eth0
       /usr/sbin/vzctl set 100 --netif_add eth1

The above creates corresponding veth100.0 and veth100.1 devices on the host and updates the host /etc/vz/conf/100.conf file with generated MAC addresses for the veth devices.
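
The generated NETIF entry in 100.conf will look roughly like the following (the MAC addresses here are placeholders; vzctl generates real ones):

       NETIF="ifname=eth0,mac=00:18:51:XX:XX:XX,host_ifname=veth100.0,host_mac=00:18:51:XX:XX:XX;ifname=eth1,mac=00:18:51:XX:XX:XX,host_ifname=veth100.1,host_mac=00:18:51:XX:XX:XX"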

7. Next, add the host-side vethN interfaces to the corresponding host bridge interfaces (vzbrN).

Create host:/etc/sysconfig/network-scripts/ifcfg-veth100.0

       DEVICE=veth100.0
       ONBOOT=yes
       BRIDGE=vzbr0

Create host:/etc/sysconfig/network-scripts/ifcfg-veth100.1

       DEVICE=veth100.1
       ONBOOT=yes
       BRIDGE=vzbr1

To make the above take effect, either do another 'service network restart' on the host, or manually add each VE interface to its corresponding bridge by running:

       /usr/sbin/brctl addif vzbr0 veth100.0
       /usr/sbin/brctl addif vzbr1 veth100.1

8. Verify that each bridge includes the host interface and the veth interfaces for each VE:

       /usr/sbin/brctl show
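
The output should look roughly like this (bridge ids will differ):

       bridge name     bridge id               STP enabled     interfaces
       vzbr0           8000.xxxxxxxxxxxx       no              eth0
                                                               veth100.0
       vzbr1           8000.xxxxxxxxxxxx       no              eth1
                                                               veth100.1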

9. In the container, create the ifcfg network scripts for each interface (eth0 and eth1). The scripts should look like standard ifcfg network scripts for a host.

       /usr/sbin/vzctl enter 100

After entering the VE:

       vi /etc/sysconfig/network-scripts/ifcfg-eth0
       vi /etc/sysconfig/network-scripts/ifcfg-eth1

As noted above, the ifcfg-ethN files in the VE should be identical to standard ifcfg-eth* files from a non-virtualized host.
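
For example, with a static IPv4 address and IPv6 autoconfiguration, the container's ifcfg-eth0 might look like this (all addresses are placeholders for the VE's own address, netmask and gateway):

       DEVICE=eth0
       BOOTPROTO=static
       IPADDR=xxx.xxx.xxx.xxx
       NETMASK=aaa.aaa.aaa.aaa
       GATEWAY=ggg.ggg.ggg.ggg
       ONBOOT=yes
       IPV6INIT=yes
       IPV6_AUTOCONF=yes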

10. Inside the container, initialize the interfaces and restart the network service.

       /sbin/ifconfig eth0 0
       /sbin/ifconfig eth1 0
       /sbin/service network restart

Alternatively, just restart the VE from the host.
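
For example, to restart the container used in this example:

       /usr/sbin/vzctl restart 100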

11. Add FORWARD ACCEPT rules to the host's iptables and ip6tables for each VE IPv4 and IPv6 address. You do NOT need to enable any special network forwarding via sysctl.

iptables:

       -A FORWARD -s xxx.xxx.xxx.xxx -j ACCEPT
       -A FORWARD -d xxx.xxx.xxx.xxx -j ACCEPT

ip6tables:

       -A FORWARD -s xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx -j ACCEPT
       -A FORWARD -d xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx -j ACCEPT

Then restart both iptables and ip6tables on the host:

       service iptables restart
       service ip6tables restart

The VE iptables and ip6tables configuration can be treated as fully independent of the host iptables and ip6tables configuration.

12. Verify that the host and the VE have connectivity to each other, as well as to the rest of the network.
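
A quick check from the host might look like this, pinging the VE's addresses and then pinging an outside host from inside the VE (all addresses are placeholders):

       ping -c 3 xxx.xxx.xxx.xxx
       ping6 -c 3 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
       /usr/sbin/vzctl exec 100 ping -c 3 yyy.yyy.yyy.yyy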

13. For each additional VE, repeat the procedure starting at step 5.

== See also ==
* [[Virtual network device]]
* [[IPv6]]
* [[Differences between venet and veth]]