Virtual Ethernet device

From OpenVZ Virtuozzo Containers Wiki
<translate>
<!--T:1-->
'''Virtual Ethernet device''' is an Ethernet-like device that can be used inside a [[container]]. Unlike a [[venet]] network device, a [[veth]] device has a MAC address. Therefore, it can be used in more configurations. When veth is bridged to a [[CT0]] network interface (e.g., eth0), the container can act as an independent host on the network. The container's user can set up all of the networking himself, including IPs, gateways, etc.
  
<!--T:2-->
A virtual Ethernet device consists of two Ethernet devices, one in [[CT0]] (e.g., vethN.0) and a corresponding one in CT (e.g., eth0), that are connected to each other. If a packet is sent to one device, it will come out the other device.
  
== Virtual Ethernet device usage == <!--T:3-->
  
=== Kernel module === <!--T:4-->
The <code>vzethdev</code> module should be loaded. You can check it with the following command:
<pre>
# lsmod | grep vzeth
vzethdev                8224  0
vzmon                  35164  5 vzethdev,vznetdev,vzrst,vzcpt
vzdev                   3080  4 vzethdev,vznetdev,vzmon,vzdquota
</pre>
  
<!--T:5-->
In case it is not loaded, load it:
<pre>
# modprobe vzethdev
</pre>
  
=== MAC addresses === <!--T:6-->
The following steps to generate a MAC address are not necessary, since newer versions of vzctl will automatically generate a MAC address for you. These steps are provided in case you want to set a MAC address manually.

<!--T:7-->
You should use a random MAC address when adding a network interface to a container. Do not use MAC addresses of real eth devices, because this can lead to collisions.
  
<!--T:8-->
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format.
  
<!--T:9-->
There is a utility script available for generating MAC addresses: https://github.com/moutai/eucalyptus-utils/blob/master/easymac.sh. It is used like this:

<!--T:10-->
<pre>
chmod +x easymac.sh
./easymac.sh -R
</pre>
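If easymac.sh is not at hand, a random MAC in the locally administered range can also be produced with a short shell function. This is an illustrative sketch only (it is not part of vzctl or easymac.sh, and assumes bash's $RANDOM); the 02 first octet sets the locally-administered bit and keeps the address unicast:

<pre>
#!/bin/bash
# Sketch: print a random locally administered unicast MAC address
# in the XX:XX:XX:XX:XX:XX format expected by vzctl.
gen_mac() {
  printf '02:%02X:%02X:%02X:%02X:%02X\n' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
    $((RANDOM % 256)) $((RANDOM % 256))
}
gen_mac
</pre>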
  
=== Adding veth to a CT === <!--T:11-->

<!--T:12-->
<pre>
vzctl set <CTID> --netif_add <ifname>[,<mac>,<host_ifname>,<host_mac>,<bridge>]
</pre>
 
  
<!--T:13-->
Here
* <tt>ifname</tt> is the Ethernet device name in the CT
* <tt>mac</tt> is its MAC address in the CT
* <tt>host_ifname</tt> is the Ethernet device name on the host ([[CT0]])
* <tt>host_mac</tt> is its MAC address on the host ([[CT0]]). If you want independent communication with the container through the bridge, you should explicitly specify the multicast MAC address FE:FF:FF:FF:FF:FF here.
* <tt>bridge</tt> is an optional parameter which can be used in custom network start scripts to automatically add the interface to a bridge. (See the reference to the vznetaddbr script below and persistent bridge configurations.)

<!--T:14-->
{{Note|All parameters except <code>ifname</code> are optional. Missing parameters, except for bridge, are automatically generated if not specified.}}
  
<!--T:15-->
Example:

<!--T:16-->
<pre>
vzctl set 101 --netif_add eth0 --save
</pre>
 
<!--T:17-->
If you want to specify everything:
  
<!--T:18-->
<pre>
vzctl set 101 --netif_add eth0,00:12:34:56:78:9A,veth101.0,00:12:34:56:78:9B --save
</pre>
 
  
<!--T:19-->
If you want to use independent communication through the bridge:

<!--T:20-->
<pre>
vzctl set 101 --netif_add eth0,00:12:34:56:78:9A,veth101.0,FE:FF:FF:FF:FF:FF,vzbr0 --save
</pre>
  
<!--T:21-->
If you want to specify the bridge and autogenerate the other values:

<!--T:22-->
<pre>
vzctl set 101 --netif_add eth0,,,,vzbr0 --save
</pre>
  
=== Removing veth from a CT === <!--T:23-->

<!--T:24-->
<pre>
vzctl set <CTID> --netif_del <dev_name>|all
</pre>
 
  
<!--T:25-->
Here
* <code>dev_name</code> is the Ethernet device name in the [[CT]].

<!--T:26-->
{{Note|If you want to remove all Ethernet devices in a CT, use <code>all</code>.}}
  
<!--T:27-->
Example:

<!--T:28-->
<pre>
vzctl set 101 --netif_del eth0 --save
</pre>
 
  
== Common configurations with virtual Ethernet devices == <!--T:29-->
Module <tt>vzethdev</tt> must be loaded to operate with veth devices.

=== Simple configuration with virtual Ethernet device === <!--T:30-->
  
<!--T:31-->
Assuming that 192.168.0.0/24 is being used on your LAN, the following sections show how to configure a container for the LAN using veth.

==== Start a CT ==== <!--T:32-->

<!--T:33-->
<pre>
[host-node]# vzctl start 101
</pre>
 
  
==== Add veth device to CT ==== <!--T:34-->

<!--T:35-->
<pre>
[host-node]# vzctl set 101 --netif_add eth0 --save
</pre>

<!--T:36-->
This allocates a MAC address and associates it with the host eth0 port.
  
==== Configure devices in CT0 ==== <!--T:37-->
The following steps are needed when the [[CT]] is '''not''' bridged to a [[CT0]] network interface. That is because the [[CT]] is connected to a virtual network that is "behind" [[CT0]]. [[CT0]] must forward packets between its physical network interface and the virtual network interface where the [[CT]] is located. The first step below, which configures the interface, is not necessary if the container has been started, since the device will already have been initialized.
<pre>
[host-node]# ifconfig veth101.0 0
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/proxy_arp
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
</pre>
  
==== Configure device in CT ==== <!--T:38-->
The following steps show an example of a quick manual configuration of the [[CT]] network interface. Typically, you would configure the network settings in /etc/network/interfaces (Debian, see below) or however it is normally configured on your distribution. You can also comment out or remove the configuration for venet0, if it exists, because that device will not be used.
<pre>
[host-node]# vzctl enter 101
[ve-101]# /sbin/ifconfig eth0 0
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0
[ve-101]# /sbin/ip route add default dev eth0
</pre>
  
<!--T:39-->
Notes:
* Until you ifconfig eth0, it won't appear. When you do, it will use the MAC address that netif_add assigned earlier.
* 192.168.0.101 is chosen to be an [[unrouteable private ip address]], where 101 reminds you that it is node 101.
* The "ip route" command directs all traffic for that address to device eth0.
* In theory you could [[use dhcpd with OpenVZ]] and dhclient to pick up a DHCP address from your router instead of hardwiring it.
** http://openvz.org/pipermail/users/2005-November/000020.html
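As a minimal sketch of the DHCP alternative mentioned in the notes above (assuming dhclient is installed in the container and a DHCP server answers on the LAN):

<pre>
[ve-101]# /sbin/ifconfig eth0 0
[ve-101]# dhclient eth0
</pre>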
==== Add route in [[CT0]] ==== <!--T:40-->
Since [[CT0]] is acting as a router between its physical network interface and the virtual network interface of the [[CT]], we need to add a route to the [[CT]] to direct traffic to the right destination.
<pre>
[host-node]# ip route add 192.168.0.101 dev veth101.0
</pre>
=== Using a directly routed IPv4 with virtual Ethernet device === <!--T:41-->

==== Situation ==== <!--T:42-->
The Hardware Node (HN/CT0) has 192.168.0.1/24 with router 192.168.0.254.

<!--T:43-->
We also know that IPv4 address 10.0.0.1/32 is directly routed to 192.168.0.1 (this is called a ''fail-over IP'').

<!--T:44-->
We want to give this directly routed IPv4 address to a container (CT).
  
==== Start container ==== <!--T:45-->

<!--T:46-->
<pre>
[host-node]# vzctl start 101
</pre>
 
  
==== Add veth device to CT ==== <!--T:47-->

<!--T:48-->
<pre>
[host-node]# vzctl set 101 --netif_add eth0 --save
</pre>

<!--T:49-->
This allocates a MAC address and associates it with the host eth0 port.

==== Configure device and add route in CT0 ==== <!--T:50-->

<!--T:51-->
<pre>
[host-node]# ifconfig veth101.0 0
[host-node]# ip route add 10.0.0.1 dev veth101.0
</pre>
  
<!--T:52-->
You can automate this at container creation by using a mount script, <tt>$VEID.mount</tt>.

<!--T:53-->
The problem here is that the ''veth'' interface appears in CT0 '''after''' the container has started, therefore we cannot use the commands directly in the mount script. Instead, we launch a shell script (enclosed by { }) in the background (operator '''&''') that waits for the interface to be ready and then adds the IP route.

<!--T:54-->
Contents of the mount script <tt>/etc/vz/conf/101.mount</tt>:
 
<pre>
#!/bin/bash
# This script sources VPS configuration files in the same order as vzctl does

# if one of these files does not exist then something is really broken
[ -f /etc/vz/vz.conf ] || exit 1
[ -f $VE_CONFFILE ] || exit 1

# source both files. Note the order, it is important
. /etc/vz/vz.conf
. $VE_CONFFILE

# Configure veth with IP after VPS has started
{
  IP=X.Y.Z.T
  DEV=veth101.0
  while sleep 1; do
    /sbin/ifconfig $DEV 0 >/dev/null 2>&1
    if [ $? -eq 0 ]; then
      /sbin/ip route add $IP dev $DEV
      break
    fi
  done
} &
</pre>
  
==== Make sure IPv4 forwarding is enabled in CT0 ==== <!--T:58-->

<!--T:59-->
<pre>
[host-node]# echo 1 > /proc/sys/net/ipv4/ip_forward
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding
</pre>
You can set this permanently by using <tt>/etc/sysctl.conf</tt>.
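For example, a matching <tt>/etc/sysctl.conf</tt> fragment for the global switch and the physical interface would look like this (a sketch; interface names are from the example above):

<pre>
# /etc/sysctl.conf: enable IPv4 forwarding at boot
net.ipv4.ip_forward = 1
net.ipv4.conf.eth0.forwarding = 1
</pre>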
  
==== Configure device in CT ==== <!--T:60-->

<!--T:61-->
1. Configure the IP address

<!--T:62-->
2. Add the gateway

<!--T:63-->
3. Add the default route

<!--T:64-->
<pre>
[ve-101]# /sbin/ifconfig eth0 10.0.0.1 netmask 255.255.255.255
[ve-101]# /sbin/ip route add 192.168.0.1 dev eth0
[ve-101]# /sbin/ip route add default via 192.168.0.1
</pre>
  
<!--T:65-->
In a Debian container, you can configure this permanently by using <tt>/etc/network/interfaces</tt>:
<pre>
auto eth0
iface eth0 inet static
        address 10.0.0.1
        netmask 255.255.255.255
        up /sbin/ip route add 192.168.0.1 dev eth0
        up /sbin/ip route add default via 192.168.0.1
</pre>
  
=== Virtual Ethernet device with IPv6 === <!--T:66-->

<!--T:67-->
See the [[VEs and HNs in same subnets]] article.
  
=== Independent Virtual Ethernet communication through the bridge === <!--T:68-->
Bridging a [[CT]] interface to a [[CT0]] interface is the magic that allows the [[CT]] to be an independent host on the network with its own IP address, gateway, etc. [[CT0]] does not need any configuration for forwarding packets to the [[CT]], performing proxy ARP for the [[CT]], or even routing.

<!--T:69-->
To manually configure a bridge and add devices to it, perform steps 1 - 4 from the Simple configuration chapter for several containers and/or veth devices, using FE:FF:FF:FF:FF:FF as the [[CT0]]-side veth MAC address, and then follow these steps.

==== Create bridge device ==== <!--T:70-->
<pre>
[host-node]# brctl addbr vzbr0
</pre>
  
==== Add veth devices to bridge ==== <!--T:71-->
<pre>
[host-node]# brctl addif vzbr0 veth101.0
</pre>
  
==== Configure bridge device ==== <!--T:72-->
<pre>
[host-node]# ifconfig vzbr0 0
</pre>
  
=== Automating the bridge === <!--T:73-->
The most convenient method is to automatically create the bridge at boot as a network interface, add the physical interface from [[CT0]], and then add the interface from each [[CT]] as it starts. All devices are connected to a virtual switch, and containers directly access the network just as any other host, without additional configuration on [[CT0]].

<!--T:74-->
In Debian, configure the network interface on [[CT0]] to plug into a bridge in /etc/network/interfaces. The [[CT0]] physical device is added to the bridge as the "uplink" port to the physical network. You need to have bridge-utils installed for this to work.

<!--T:75-->
The bridge forwarding delay is set to 0 seconds so that forwarding begins immediately when a new interface is added to a bridge. The default delay is 30 seconds, during which the bridge pauses all traffic to listen and figure out where devices are. This can interrupt services when a container is added to the bridge. If you aren't running the spanning tree protocol (off by default) and the bridge does not create a loop in your network, then there is no need for a forwarding delay.
<pre>
iface eth0 inet manual

auto vzbr0
iface vzbr0 inet static
        bridge_ports eth0
        bridge_fd 0
        address 192.168.1.100
        netmask 255.255.255.0
        gateway 192.168.1.254
</pre>
Follow the steps below for making a veth bridge persistent with the included script. That will automatically add each container to the bridge when it is started. Finally, specify vzbr0 as the bridge when adding the network interface to a container, as described above. No configuration is needed on [[CT0]] for forwarding packets, proxy ARP, or additional routes. The interface in each [[CT]] can be configured as desired. Everything "just works" according to normal network interface configuration and default routing rules. Note that, as discussed in the troubleshooting section below, bridged packets by default pass through the FORWARD iptables chain. Take care when adding rules to that table that bridged packets are not mistakenly blocked. This behavior can be disabled, if desired (sysctl: <code>net.bridge.bridge-nf-call-iptables</code>).
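For example, to stop iptables from filtering bridged traffic at runtime (a sketch; requires the bridge module to be loaded so the sysctl exists):

<pre>
[host-node]# sysctl -w net.bridge.bridge-nf-call-iptables=0
</pre>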
  
=== Making a veth-device persistent === <!--T:77-->
These steps are no longer necessary, as the veth device is automatically created when the container is started. They remain here as a reference.

<!--T:78-->
According to http://bugzilla.openvz.org/show_bug.cgi?id=301, the bug that kept the veth device from being persistent was "Obsoleted now when --veth_add/--veth_del are introduced".

<!--T:79-->
See http://wiki.openvz.org/w/index.php?title=Virtual_Ethernet_device&diff=5990&oldid=5989#Making_a_veth-device_persistent for a workaround that used to be described in this section.

<!--T:80-->
That's it! At this point, when you restart the CT you should see a new line in the output, indicating that the interface is being configured and a new route is being added. You should be able to ping the host, and to enter the CT and use the network.
  
=== Making a bridged veth-device persistent === <!--T:81-->

<!--T:82-->
Like the above example, here is how to add the veth device to a bridge in a persistent way.

<!--T:83-->
vzctl includes a 'vznetaddbr' script, which makes use of the ''bridge'' parameter of the --netif_add switch.
<!--T:84-->
Just create /etc/vz/vznet.conf containing the following:

<!--T:85-->
<pre>
EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"
</pre>
  
<!--T:86-->
Or just run this command:
<pre>
echo 'EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"' > /etc/vz/vznet.conf
</pre>
  
<!--T:87-->
The script uses 'vmbr0' as the default bridge name when no bridge is specified.
 
  
=== Virtual Ethernet devices + VLAN === <!--T:88-->
This configuration can be done by adding a VLAN device to the previous configuration.
  
== See also == <!--T:89-->
* [[Virtual network device]]
* [[Differences between venet and veth]]
* [[Using private IPs for Hardware Nodes]]
* Patch: [[Disable venet interface]]
* Troubleshooting: [[Bridge doesn't forward packets]]
  
== External links == <!--T:90-->
* [http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO/hints-daemons-radvd.html Linux IPv6 HOWTO, a chapter about radvd]
* [http://sysadmin-ivanov.blogspot.com/2008/02/2-veth-with-2-bridges-on-openvz-at.html 2 veth with 2 bridges setup]
* [https://forum.proxmox.com/threads/physical-host-with-2-nics-each-with-different-gateways.1733/#post-9287 Non default gateway for CentOS OpenVZ container] - this applies to BlueOnyx in Proxmox as well. | [[Media:TwoGWsPVECentOS.pdf|Cache]]
  
</translate>

[[Category: Networking]]
[[Category: HOWTO]]

Latest revision as of 16:39, 2 October 2016

<translate> Virtual Ethernet device is an Ethernet-like device that can be used inside a container. Unlike a venet network device, a veth device has a MAC address. Therefore, it can be used in more configurations. When veth is bridged to a CT0 network interface (e.g., eth0), the container can act as an independent host on the network. The container's user can set up all of the networking himself, including IPs, gateways, etc.

A virtual Ethernet device consists of two Ethernet devices, one in CT0 (e.g., vethN.0) and a corresponding one in CT (e.g., eth0) that are connected to each other. If a packet is sent to one device it will come out the other device.

Virtual Ethernet device usage[edit]

Kernel module[edit]

The vzethdev module should be loaded. You can check it with the following commands.

# lsmod | grep vzeth
vzethdev                8224  0
vzmon                  35164  5 vzethdev,vznetdev,vzrst,vzcpt
vzdev                   3080  4 vzethdev,vznetdev,vzmon,vzdquota

In case it is not loaded, load it:

# modprobe vzethdev

MAC addresses[edit]

The following steps to generate a MAC address are not necessary, since newer versions of vzctl will automatically generate a MAC address for you. These steps are provided in case you want to set a MAC address manually.

You should use a random MAC address when adding a network interface to a container. Do not use MAC addresses of real eth devices, because this can lead to collisions.

MAC addresses must be entered in XX:XX:XX:XX:XX:XX format.

There is a utility script available for generating MAC addresses: https://github.com/moutai/eucalyptus-utils/blob/master/easymac.sh. It is used like this:

chmod +x easymac.sh

./easymac.sh -R

Adding veth to a CT[edit]

vzctl set <CTID> --netif_add <ifname>[,<mac>,<host_ifname>,<host_mac>,<bridge>]

Here

  • ifname is the Ethernet device name in the CT
  • mac is its MAC address in the CT
  • host_ifname is the Ethernet device name on the host (CT0)
  • host_mac is its MAC address on the host (CT0), if you want independent communication with the Container through the bridge, you should explicitly specify multicast MAC address here (FE:FF:FF:FF:FF:FF).
  • bridge is an optional parameter which can be used in custom network start scripts to automatically add the interface to a bridge. (See the reference to the vznetaddbr script below and persistent bridge configurations.)
Yellowpin.svg Note: All parameters except ifname are optional. Missing parameters, except for bridge, are automatically generated, if not specified.

Example:

vzctl set 101 --netif_add eth0 --save

If you want to specify everything:

vzctl set 101 --netif_add eth0,00:12:34:56:78:9A,veth101.0,00:12:34:56:78:9B --save

If you want to use independent communication through the bridge:

vzctl set 101 --netif_add eth0,00:12:34:56:78:9A,veth101.0,FE:FF:FF:FF:FF:FF,vzbr0 --save

If you want to specify the bridge and autogenerate the other values:

vzctl set 101 --netif_add eth0,,,,vzbr0 --save

Removing veth from a CT[edit]

vzctl set <CTID> --netif_del <dev_name>|all

Here

  • dev_name is the Ethernet device name in the CT.
Yellowpin.svg Note: If you want to remove all Ethernet devices in CT, use all.

Example:

vzctl set 101 --netif_del eth0 --save

Common configurations with virtual Ethernet devices[edit]

Module vzethdev must be loaded to operate with veth devices.

Simple configuration with virtual Ethernet device[edit]

Assuming that 192.168.0.0/24 is being used on your LAN, the following sections show how to configure a container for the LAN using veth.

Start a CT[edit]

[host-node]# vzctl start 101

Add veth device to CT[edit]

[host-node]# vzctl set 101 --netif_add eth0 --save

This allocates a MAC address and associates it with the host eth0 port.

Configure devices in CT0[edit]

The following steps are needed when the CT is not bridged to a CT0 network interface. That is because the CT is connected to a virtual network that is "behind" CT0. CT0 must forward packets between its physical network interface and the virtual network interface where CT is located. The first step below to configure the interface is not necessary if the container has been started, since the device will have been initialized.

[host-node]# ifconfig veth101.0 0
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/proxy_arp
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

Configure device in CT[edit]

The following steps show an example of a quick manual configuration of the CT network interface. Typically, you would configure the network settings in /etc/network/interfaces (Debian, see below) or however it is normally configured on your distribution. You can also comment or remove the configuration for venet0, if it exists, because that device will not be used.

[host-node]# vzctl enter 101
[ve-101]# /sbin/ifconfig eth0 0
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0
[ve-101]# /sbin/ip route add default dev eth0

Notes:

Add route in CT0[edit]

Since CT0 is acting as a router between its physical network interface and the virtual network interface of the CT, we need to add a route to the CT to direct traffic to the right destination.

[host-node]# ip route add 192.168.0.101 dev veth101.0

Using a directly routed IPv4 with virtual Ethernet device[edit]

Situation[edit]

Hardware Node (HN/CT0) has 192.168.0.1/24 with router 192.168.0.254.

We also know that IPv4 10.0.0.1/32 is directly routed to 192.168.0.1 (this is called a fail-over IP).

We want to give this directly routed IPv4 address to a container (CT).

Start container[edit]

[host-node]# vzctl start 101

Add veth device to CT[edit]

[host-node]# vzctl set 101 --netif_add eth0 --save

This allocates a MAC address and associates it with the host eth0 port.

Configure device and add route in CT0[edit]

[host-node]# ifconfig veth101.0 0
[host-node]# ip route add 10.0.0.1 dev veth101.0

You can automate this at container start by using a mount script ($VEID.mount).

The problem here is that the veth interface appears in CT0 only after the container has started, so we cannot run these commands directly from the mount script. Instead, the mount script launches a shell block (enclosed in { }) in the background (the & operator) that waits for the interface to be ready and then adds the IP route.

Contents of the mount script /etc/vz/conf/101.mount:

#!/bin/bash
# This script sources the container configuration files in the same order as vzctl does

# if one of these files does not exist then something is really broken
[ -f /etc/vz/vz.conf ] || exit 1
[ -f $VE_CONFFILE ] || exit 1

# source both files; note the order, it is important
. /etc/vz/vz.conf
. $VE_CONFFILE

# Configure veth with the IP after the container has started
{
  IP=X.Y.Z.T
  DEV=veth101.0
  while sleep 1; do
    /sbin/ifconfig $DEV 0 >/dev/null 2>&1
    if [ $? -eq 0 ]; then
      /sbin/ip route add $IP dev $DEV
      break
    fi
  done
} &
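The waiting pattern used by the mount script can be written as a reusable function: poll until a network device exists, then act on it. The sketch below checks sysfs instead of calling ifconfig, and is demonstrated on "lo" (always present on Linux); on a real node you would pass veth101.0 and add the route once the device appears.

```shell
# Poll until the named network device exists, up to 10 attempts.
# A device's directory appears in /sys/class/net as soon as the
# kernel creates it.
wait_for_dev() {
  dev="$1"
  tries=0
  while [ "$tries" -lt 10 ]; do
    if [ -d "/sys/class/net/$dev" ]; then
      echo up
      return 0
    fi
    tries=$((tries + 1))
    sleep 1
  done
  echo timeout
  return 1
}

# Demonstration on the loopback device; prints "up" immediately
wait_for_dev lo
```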

=== Make sure IPv4 forwarding is enabled in CT0 ===

[host-node]# echo 1 > /proc/sys/net/ipv4/ip_forward
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding

You can permanently set this by using /etc/sysctl.conf.
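For example, a minimal /etc/sysctl.conf fragment might look like the following (a sketch, assuming eth0 is the CT0 public interface). Note that per-interface settings for veth devices generally cannot be set here, because those devices do not exist yet at boot; they have to be set after the container starts, for example from a mount script.

```
# Enable IPv4 forwarding at boot
net.ipv4.ip_forward = 1
net.ipv4.conf.eth0.forwarding = 1
```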

=== Configure device in CT ===

1. Configure IP address

2. Add gateway

3. Add default route

[ve-101]# /sbin/ifconfig eth0 10.0.0.1 netmask 255.255.255.255
[ve-101]# /sbin/ip route add 192.168.0.1 dev eth0
[ve-101]# /sbin/ip route add default via 192.168.0.1

In a Debian container, you can configure this permanently by using /etc/network/interfaces:

auto eth0
iface eth0 inet static
        address 10.0.0.1
        netmask 255.255.255.255
        up /sbin/ip route add 192.168.0.1 dev eth0
        up /sbin/ip route add default via 192.168.0.1

== Virtual Ethernet device with IPv6 ==

See the VEs and HNs in same subnets article.

== Independent Virtual Ethernet communication through the bridge ==

Bridging a CT interface to a CT0 interface is the magic that allows the CT to be an independent host on the network with its own IP address, gateway, etc. CT0 does not need any configuration for forwarding packets to the CT, performing proxy ARP for it, or routing.

To manually configure a bridge and add devices to it, perform steps 1-4 from the Simple configuration chapter for several containers and/or veth devices, using FE:FF:FF:FF:FF:FF as the CT0-side veth MAC address, and then follow these steps.

=== Create bridge device ===

[host-node]# brctl addbr vzbr0

=== Add veth devices to bridge ===

[host-node]# brctl addif vzbr0 veth101.0
...
[host-node]# brctl addif vzbr0 veth101.n
[host-node]# brctl addif vzbr0 veth102.0
...
...
[host-node]# brctl addif vzbr0 vethXXX.N

=== Configure bridge device ===

[host-node]# ifconfig vzbr0 0

=== Automating the bridge ===

The most convenient method is to automatically create the bridge at boot as a network interface, add the physical interface from CT0 and then add the interface from each CT as it starts. All devices are connected to a virtual switch, and containers directly access the network just as any other host without additional configuration on CT0.

In Debian, configure the network interface on CT0 to plug into a bridge in /etc/network/interfaces. The CT0 physical device is added to the bridge as the "uplink" port to the physical network. You need to have bridge-utils installed for this to work.

The bridge forwarding delay is set to 0 seconds so that forwarding begins immediately when a new interface is added to a bridge. The default delay is 30 seconds, during which the bridge pauses all traffic to listen and figure out where devices are. This can interrupt services when a container is added to the bridge. If you aren't running the spanning tree protocol (off by default) and the bridge does not create a loop in your network, then there is no need for a forwarding delay.

iface eth0 inet manual

auto vzbr0
iface vzbr0 inet static
        bridge_ports eth0
        bridge_fd 0
        address 192.168.1.100
        netmask 255.255.255.0
        gateway 192.168.1.254

Follow the steps below for making a veth bridge persistent with the included script; that will automatically add each container to the bridge when it is started. Finally, specify vzbr0 as the bridge when adding the network interface to a container, as described above. No configuration is needed on CT0 for forwarding packets, proxy ARP, or additional routes. The interface in each CT can be configured as desired; everything "just works" according to normal network interface configuration and default routing rules.

Note that, as discussed in the troubleshooting section below, bridged packets pass through the FORWARD iptables chain by default. Take care when adding rules to that chain that bridged packets are not mistakenly blocked. This behavior can be disabled if desired (sysctl: net.bridge.bridge-nf-call-iptables).

== Making a veth-device persistent ==

These steps are no longer necessary, as the veth device is automatically created when the container is started. They remain here as a reference.

According to http://bugzilla.openvz.org/show_bug.cgi?id=301, the bug that prevented the veth device from being persistent was "Obsoleted now when --veth_add/--veth_del are introduced".

See http://wiki.openvz.org/w/index.php?title=Virtual_Ethernet_device&diff=5990&oldid=5989#Making_a_veth-device_persistent for a workaround that used to be described in this section.

That's it! At this point, when you restart the CT you should see a new line in the output indicating that the interface is being configured and a new route being added. You should be able to ping the host, and to enter the CT and use the network.

== Making a bridged veth-device persistent ==

As in the example above, here is how to add the veth device to a bridge in a persistent way.

vzctl includes a 'vznetaddbr' script, which makes use of the bridge parameter of the --netif_add switch.

Just create /etc/vz/vznet.conf containing the following.

EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"

Or just run this command:

echo 'EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"' > /etc/vz/vznet.conf

The script uses 'vmbr0' as the default bridge name when no bridge is specified.
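To attach a container's veth device to a specific bridge instead, the bridge can be named in the --netif_add parameter when adding the interface. A sketch (the fields are ifname,mac,host_ifname,host_mac,bridge; empty fields are auto-generated):

```shell
# Add a veth device to CT 101 and have the external script attach
# its host side to bridge vzbr0
vzctl set 101 --netif_add eth0,,,,vzbr0 --save
```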

== Virtual Ethernet devices + VLAN ==

This configuration can be done by adding a VLAN device to the previous configuration.
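A sketch of one possible setup, assuming VLAN ID 10 on the CT0 interface eth0 and a dedicated bridge vzbr10 (the names and the VLAN ID are examples; requires the 8021q kernel module and bridge-utils):

```shell
# Load the 802.1Q module and create a VLAN subinterface on eth0
modprobe 8021q
ip link add link eth0 name eth0.10 type vlan id 10
ip link set eth0.10 up

# Bridge the VLAN subinterface with the container's veth device
brctl addbr vzbr10
brctl addif vzbr10 eth0.10
brctl addif vzbr10 veth101.0
ifconfig vzbr10 0
```

Traffic from the container then leaves CT0 tagged with VLAN 10, and the container is otherwise configured exactly as in the bridged setup above.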

== See also ==

== External links ==

</translate>