This describes a method of setting up networking for a host and its VEs such that the networking configuration of the VEs can be written exactly as if each VE were a standalone host in the same subnets or VLANs as the host. The method uses the Virtual Ethernet device and bridges between the host and its containers. This technique has the advantage of allowing IPv6 network configurations to work on both VEs and the host as they normally would; in particular, both can use IPv6 autoconfiguration. The network configuration of a VE can be identical to that of a non-VE system.
In the following example the host has two physical interfaces and we are setting up the network configuration for VE 100. The host IP configuration is moved out of the ethN interface configs and into the brN bridge interface config scripts (ifcfg-br0 and ifcfg-br1). I.e., the host IP configuration will now reside on the brN interfaces instead of the ethN interfaces. The example also assumes IPv4 is configured statically, whereas IPv6 is auto-configured.
==Configure host bridge interfaces==
Steps 1 through 4 are done only once on the host.
1. (Optional) Verify that you can create a bridge interface for each physical interface on the host.
/usr/sbin/brctl addbr br0
/usr/sbin/brctl addbr br1
If the above commands do not work you may need to install the bridge-utils package.
2. Make note of the existing IP configuration in the host's ifcfg-ethN files. Also, record the hardware MAC addresses of the ethernet interfaces from the output of 'ifconfig':
/sbin/ifconfig eth0
/sbin/ifconfig eth1
Then, modify the ifcfg-ethN files on the host so that they ONLY bridge to the corresponding brN interface. /etc/sysconfig/network-scripts/ifcfg-eth0 should look like:
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
BRIDGE=br0
Similarly, ifcfg-eth1 will look like:
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
BRIDGE=br1
Note that the ifcfg-ethN files on the host no longer contain any IP information.
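The MAC addresses recorded in step 2 can also be extracted mechanically. A minimal sketch; `sample` below is a stand-in for the real output of '/sbin/ifconfig eth0', and the HWaddr shown is a placeholder:

```shell
# Extract the MAC address from ifconfig-style output so it can be
# pasted into the MACADDR line of ifcfg-brN in step 3.
# 'sample' stands in for the real output of '/sbin/ifconfig eth0'.
sample='eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55'

mac=$(echo "$sample" | sed -n 's/.*HWaddr \([0-9A-Fa-f:]*\).*/\1/p')
echo "$mac"
```

On systems where ifconfig output differs, `cat /sys/class/net/eth0/address` is a simpler alternative.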
3. Create ifcfg-brN files and copy the IP configuration that was previously in the ifcfg-ethN files into them. Here's what host:/etc/sysconfig/network-scripts/ifcfg-br0 would look like, assuming the IPv4 address is assigned statically and IPv6 auto-configuration (SLAAC) is used:
DEVICE=br0
BOOTPROTO=static
IPADDR=xxx.xxx.xxx.xxx
NETMASK=aaa.aaa.aaa.aaa
ONBOOT=yes
TYPE=Bridge
MACADDR=mm:mm:mm:mm:mm:mm
Similarly, ifcfg-br1 should look like:
DEVICE=br1
BOOTPROTO=static
IPADDR=yyy.yyy.yyy.yyy
NETMASK=bbb.bbb.bbb.bbb
ONBOOT=yes
TYPE=Bridge
MACADDR=nn:nn:nn:nn:nn:nn
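The four config files from steps 2 and 3 can be generated with a short script. A minimal sketch that writes into a scratch directory for review; the IP addresses and MACs below are placeholders (192.0.2.x and 198.51.100.x are documentation ranges), to be replaced with the values recorded in step 2, and in production the files belong in /etc/sysconfig/network-scripts:

```shell
#!/bin/sh
# Generate the host-side ifcfg files from steps 2 and 3 into a scratch
# directory.  Addresses and MACs are placeholders; substitute the real
# values recorded in step 2 before copying anything into
# /etc/sysconfig/network-scripts.
OUT=$(mktemp -d)

IPS="192.0.2.10 198.51.100.10"
MACS="00:11:22:33:44:55 00:11:22:33:44:66"

n=0
for ip in $IPS; do
    mac=$(echo $MACS | cut -d' ' -f$((n + 1)))

    # ethN carries no IP configuration any more; it only joins brN.
    cat > "$OUT/ifcfg-eth$n" <<EOF
DEVICE=eth$n
BOOTPROTO=none
ONBOOT=yes
BRIDGE=br$n
EOF

    # brN takes over the IP configuration and the NIC's MAC address.
    cat > "$OUT/ifcfg-br$n" <<EOF
DEVICE=br$n
BOOTPROTO=static
IPADDR=$ip
NETMASK=255.255.255.0
ONBOOT=yes
TYPE=Bridge
MACADDR=$mac
EOF
    n=$((n + 1))
done

ls "$OUT"
```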
4. On the host, do a 'service network restart' and verify the host has both IPv4 and IPv6 connectivity on its brN interfaces.
==Create the VE veth interfaces==
5. Create the VE as you normally would, except do NOT specify any IP address, just the hostname. Specifying an IP address during VE creation creates an unwanted venet interface which is not used in this configuration.
/usr/sbin/vzctl create 100 --ostemplate name --hostname name
However, if the VE already exists, use vzctl to remove any venet devices; they will not be used:
/usr/sbin/vzctl set 100 --ipdel all --save
6. For each VE, create its ethN devices from the host (ignore warnings about "Container does not have configured veth"):
/usr/sbin/vzctl set 100 --netif_add eth0 --save
/usr/sbin/vzctl set 100 --netif_add eth1 --save
The above updates the host /etc/vz/conf/100.conf file with generated MAC addresses for the veth devices. When the VE is started, the corresponding veth100.0 and veth100.1 devices will be automatically created on the host.
==Bridge the host and VE==
7. Next we add the host vethN interfaces to the host bridge interfaces (brN).
Create host:/etc/sysconfig/network-scripts/ifcfg-veth100.0
DEVICE=veth100.0
ONBOOT=no
BRIDGE=br0
Create host:/etc/sysconfig/network-scripts/ifcfg-veth100.1
DEVICE=veth100.1
ONBOOT=no
BRIDGE=br1
To make the above take effect, either start the VE:
/usr/sbin/vzctl start 100
or, if the VE is already running, add the veth interfaces to the bridges manually:
/usr/sbin/brctl addif br0 veth100.0
/usr/sbin/brctl addif br1 veth100.1
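The manual additions in step 7 can be scripted as a loop over the interfaces. A dry-run sketch: with `BRCTL="echo brctl"` it only prints the commands, and replacing that assignment with the real brctl path would execute them:

```shell
# Dry run: print the brctl commands that would attach CT 100's veth
# devices to the bridges.  Replace BRCTL="echo brctl" with
# BRCTL=/usr/sbin/brctl to execute them for real.
BRCTL="echo brctl"

cmds=$(for n in 0 1; do
    $BRCTL addif br$n veth100.$n
done)
echo "$cmds"
```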
8. Verify each bridge includes the host interface and the veth interfaces for each VE:
/usr/sbin/brctl show
==Configure the VE networking==
9. Enter the container and create the ifcfg network scripts for each interface, eth0 and eth1. The scripts should look like standard ifcfg network scripts for a host:
/usr/sbin/vzctl enter 100
vi /etc/sysconfig/network-scripts/ifcfg-eth0
vi /etc/sysconfig/network-scripts/ifcfg-eth1
As noted above, the ifcfg-ethN files in the VE should be identical to standard ifcfg-eth* files from a non-virtualized host. A minimal ifcfg-eth0 file using a static IPv4 address would have the following entries:
DEVICE=eth0
BOOTPROTO=static
IPADDR=xxx.xxx.xxx.xxx
NETMASK=yyy.yyy.yyy.yyy
ONBOOT=yes
GATEWAY=zzz.zzz.zzz.zzz
10. Initialize the interfaces and restart the network service inside the container:
service network restart
Alternatively, just restart the VE from the host:
/usr/sbin/vzctl restart 100
NOTE: Due to bug [http://bugzilla.openvz.org/show_bug.cgi?id=1723 1723] this setup might not work: enabling routing on CT0 can effectively kill all IPv6 connectivity for the CT, depending on the setup. (This bug is reported to be solved since 2011-06-07, so this shouldn't be an issue anymore.)
==Additional VEs==
For each additional VE, start at step #5.
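The per-VE steps lend themselves to a small wrapper script. A dry-run sketch: with `VZCTL="echo vzctl"` it only prints the commands it would run, and the CT ID and ostemplate name are placeholders (the per-VE ifcfg-vethCTID.N files from step 7 would still need to be created separately):

```shell
#!/bin/sh
# Dry run of steps 5-6 for an additional CT.  Replace
# VZCTL="echo vzctl" with VZCTL=/usr/sbin/vzctl to execute for real.
# CTID and the ostemplate name are placeholders.
VZCTL="echo vzctl"
CTID=101

out=$(
    # Step 5: create the CT with a hostname but no IP address.
    $VZCTL create $CTID --ostemplate name --hostname ve$CTID
    # Step 6: add a veth-backed ethN device for each bridge.
    for n in 0 1; do
        $VZCTL set $CTID --netif_add eth$n --save
    done
)
echo "$out"
```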
==See also==
* [[IPv6]]
* [[Virtual Ethernet device]]
* [[Differences between venet and veth]]