VEs and HNs in same subnets
Latest revision as of 13:20, 26 November 2013

This describes a method of setting up networking for a host and its VEs such that the networking configuration for the VEs can be configured exactly as if the VEs were standalone hosts of their own in the same subnets or VLAN as the host. This method makes use of the Virtual Ethernet device and bridges between the host and its containers. This technique has the advantage of allowing IPv6 network configurations to work on both VEs and hosts as they normally would. In particular, both hosts and VEs can use IPv6 autoconfiguration. The network configuration of a VE can be identical to that of a non-VE system.

In the following example the host has two physical interfaces and we are setting up the network configuration for VE 100. The host IP configuration is moved out of the ethN interface configs and into the brN interface config scripts (ifcfg-br0 and ifcfg-br1); i.e., the host IP configuration will now reside on the brN interfaces instead of the ethN interfaces. The example also assumes IPv4 is configured statically, whereas IPv6 is auto-configured.

Configure host bridge interfaces

Steps 1 through 4 are done only once on the host.

1. (Optional) Verify that you can create a bridge interface for each physical interface on the host.

       /usr/sbin/brctl addbr br0
       /usr/sbin/brctl addbr br1

If the above commands do not work you may need to install the bridge-utils package.

2. Make note of the existing IP configuration in the host's ifcfg-ethN files. Also, record the hardware MAC addresses of the ethernet interfaces from the output of 'ifconfig'.

       /sbin/ifconfig eth0
       /sbin/ifconfig eth1 
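The MAC addresses can also be pulled out of that output mechanically; a small sketch, assuming the classic net-tools ifconfig format that prints an `HWaddr` field:

```shell
# Grab each NIC's hardware MAC for use as the bridge MACADDR in step 3.
# Assumes net-tools output like "eth0  Link encap:Ethernet  HWaddr 00:18:51:AA:BB:CC".
hwaddr() { awk '/HWaddr/ {print $NF}'; }

for i in eth0 eth1; do
  /sbin/ifconfig "$i" 2>/dev/null | hwaddr
done
```

On newer systems the address appears on an `ether` line instead of `HWaddr`; adjust the awk pattern accordingly.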

Then, modify the ifcfg-ethN files on the host so that they ONLY bridge to the corresponding brN interface. /etc/sysconfig/network-scripts/ifcfg-eth0 should look like:

       DEVICE=eth0
       BOOTPROTO=none
       ONBOOT=yes
       BRIDGE=br0

Similarly ifcfg-eth1 will look like:

       DEVICE=eth1
       BOOTPROTO=none
       ONBOOT=yes
       BRIDGE=br1

Note that the ifcfg-ethN files on the host do not contain any IP information anymore.

3. Create ifcfg-brN files and copy the IP configuration that was previously in the ifcfg-ethN files into ifcfg-brN. Here's what host:/etc/sysconfig/network-scripts/ifcfg-br0 would look like assuming the IPv4 address is assigned statically and IPv6 auto-configuration (SLAAC) is used:

       DEVICE=br0
       BOOTPROTO=static
       IPADDR=xxx.xxx.xxx.xxx
       NETMASK=aaa.aaa.aaa.aaa
       ONBOOT=yes
       TYPE=Bridge
       MACADDR=mm:mm:mm:mm:mm:mm

Similarly, ifcfg-br1 should look like:

       DEVICE=br1
       BOOTPROTO=static
       IPADDR=yyy.yyy.yyy.yyy
       NETMASK=bbb.bbb.bbb.bbb
       ONBOOT=yes
       TYPE=Bridge
       MACADDR=nn:nn:nn:nn:nn:nn

Note that the TYPE value 'Bridge' is case-sensitive; if it is not capitalized exactly as shown, the bridge interfaces will not initialize correctly during boot.

The bridge MACADDR should be hard-coded to match the hardware MAC address of the corresponding ethernet interface; otherwise the bridge defaults to the lowest MAC address of all its member interfaces. Hard-coding it prevents the bridge MAC, and any auto-configured IPv6 address on the bridge interface, from changing as VEs are created, started, or stopped.

4. On the host, do a 'service network restart' and verify the host has both IPv4 and IPv6 connectivity to its brN interfaces.

Create the VE veth interfaces

5. Create the VE as you normally would, except do NOT specify any IP address, just the hostname. Specifying an IP address during VE creation creates an unwanted venet interface which is not used in this configuration.

       /usr/sbin/vzctl create 100 --ostemplate name --hostname name

However, if the VE already exists, use vzctl to remove any venet devices; they will not be used:

       /usr/sbin/vzctl set 100 --ipdel all --save

6. For each VE, create ethN devices on the host (ignore warnings about "Container does not have configured veth"):

       /usr/sbin/vzctl set 100 --netif_add eth0 --save
       /usr/sbin/vzctl set 100 --netif_add eth1 --save

The above updates the host /etc/vz/conf/100.conf file with generated MAC addresses for the veth devices. When the VE is started, the veth100.0 and veth100.1 devices will be automatically created on the host.
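The generated values can be inspected directly. The NETIF line in 100.conf is a comma-separated list of key=value pairs; here is a sketch of pulling out individual fields (the sample NETIF value below is illustrative, not real output):

```shell
# Extract one field from an OpenVZ NETIF value, e.g. the host-side
# interface name or MAC. Splits on commas, then takes the value after "=".
netif_field() { tr ',' '\n' | awk -F= -v k="$1" '$1 == k {print $2}'; }

# Illustrative sample; your 100.conf will contain the generated MACs.
NETIF="ifname=eth0,mac=00:18:51:AA:BB:CC,host_ifname=veth100.0,host_mac=00:18:51:DD:EE:FF"
echo "$NETIF" | netif_field host_ifname   # prints: veth100.0
```

On the real host you could grep the NETIF line out of /etc/vz/conf/100.conf and feed it through the same helper.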

Bridge the host and VE

7. Next we add the host vethN interfaces to the host bridge interfaces (brN).

Create host:/etc/sysconfig/network-scripts/ifcfg-veth100.0

       DEVICE=veth100.0
       ONBOOT=no
       BRIDGE=br0

Create host:/etc/sysconfig/network-scripts/ifcfg-veth100.1

       DEVICE=veth100.1
       ONBOOT=no
       BRIDGE=br1

To make the above take effect, either start the VE:

       /usr/sbin/vzctl start 100

or, if the VE is already running, manually add each VE interface to its corresponding bridge:

       /usr/sbin/brctl addif br0 veth100.0
       /usr/sbin/brctl addif br1 veth100.1

8. Verify each bridge includes the host interface and the veth interfaces for each VE:

       /usr/sbin/brctl show
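Rather than eyeballing the full table, membership can be checked mechanically. A sketch that parses `brctl show` output (the column layout is an assumption based on typical bridge-utils versions):

```shell
# List the member interfaces of one bridge from `brctl show` output.
# The first row for a bridge carries 4 columns (name, id, STP, first
# interface); further interfaces appear alone on continuation rows.
bridge_members() {
  awk -v b="$1" '
    $1 == b         { inbr = 1; if (NF == 4) print $4; next }
    NF == 1 && inbr { print $1; next }
                    { inbr = 0 }'
}

brctl show 2>/dev/null | bridge_members br0   # expect eth0 and veth100.0
```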

Configure the VE networking

9. Enter the VE from the host:

       /usr/sbin/vzctl enter 100

In the container, create the ifcfg network scripts for interfaces eth0 and eth1. The ifcfg-ethN files should look like standard ifcfg network scripts for a non-VE host.

       vi /etc/sysconfig/network-scripts/ifcfg-eth0
       vi /etc/sysconfig/network-scripts/ifcfg-eth1

As noted above, the ifcfg-ethN files in the VE should be identical to standard ifcfg-eth* files from a non-virtualized host. A minimal ifcfg-eth0 using a static IPv4 address would contain:

       DEVICE=eth0
       BOOTPROTO=static
       IPADDR=xxx.xxx.xxx.xxx
       NETMASK=yyy.yyy.yyy.yyy
       ONBOOT=yes
       GATEWAY=zzz.zzz.zzz.zzz  

10. Initialize the interfaces and restart the network service on the container.

       /sbin/ifconfig eth0 0
       /sbin/ifconfig eth1 0
       /sbin/service network restart

Alternatively, just restart the VE from the host.

11. Verify the host and VE have connectivity to each other as well as to the rest of the network.
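A minimal sketch for these checks (the address below is a placeholder from the TEST-NET-1 documentation range; substitute your real host and VE addresses):

```shell
# Reachability probe: two ICMP echoes, 2-second wait per reply.
# Use ping6 for the IPv6 addresses.
reachable() { ping -c 2 -W 2 "$1" >/dev/null 2>&1; }

if reachable 192.0.2.10; then   # placeholder VE address
  echo "IPv4 reply received"
else
  echo "no IPv4 reply"
fi
```

Run the same probes in both directions (host to VE, VE to host) and against a machine elsewhere on the subnet.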

Additional VEs

12. For each additional VE, start at step #5.

Notes on IPv6 autoconfiguration

If your CT0 also performs routing duties, you may find that IPv6 stateless autoconfiguration via radvd does not work for the CTs. The description below applies to a Debian (Lenny) CT0 and a Debian (Squeeze) CT.

First check whether your CT is actually receiving any router advertisements (RAs). This can be done by installing radvd (apt-get install radvd) and running radvdump. Simply wait for the next round of RAs from radvd, or trigger one (by restarting radvd on CT0, for example). If you do not receive any RAs, there is a more fundamental problem. The following only concerns the scenario where the CT receives RAs but does not configure its network interfaces accordingly. Do not forget to remove the radvd package after checking.

Because CT0 is performing routing services, all or some of the values under /proc/sys/net/ipv6/conf/*/forwarding and /proc/sys/net/ipv6/conf/*/mc_forwarding are set to 1. This appears to override the defaults for these values in the /proc filesystems of the CTs. Unless you explicitly disable forwarding in /etc/sysctl.conf, your CT will also use these values. This means that, to the IPv6 Neighbour Discovery Protocol (NDP) responsible for router advertisements and autoconfiguration, your CT is a router and therefore not allowed to use the RAs to configure its interfaces. To fix this, add or change the following lines in /etc/sysctl.conf on the CT, not on CT0:

net.ipv6.conf.all.forwarding=0
net.ipv6.conf.all.mc_forwarding=0

You may also want to explicitly disable IPv4 forwarding, since the CT is not a router. To do this, also set:

net.ipv4.ip_forward=0

Now reload sysctl on the CT by executing

sysctl -p

and you will be good to go. The CT will now autoconfigure the network interfaces the next time it sees an RA.
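One way to confirm this from inside the CT (a sketch; the interface name and the `dynamic` flag on SLAAC-learned addresses assume an iproute2-based CT):

```shell
# SLAAC-learned addresses show up as "scope global ... dynamic" in
# `ip -6 addr` output; statically configured ones lack the flag.
has_slaac() { grep -q 'scope global.*dynamic'; }

if ip -6 addr show dev eth0 2>/dev/null | has_slaac; then
  echo "SLAAC address configured"
else
  echo "no SLAAC address yet"
fi
```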

NOTE: Due to bug 1723 this setup might not work: enabling routing on CT0 can effectively kill all IPv6 connectivity for the CT, depending on the setup. (This bug is reported as fixed since 2011-06-07, so it should no longer be an issue.)

See also

IPv6
Virtual Ethernet device
Differences between venet and veth