Virtual Ethernet device
Virtual Ethernet device is an Ethernet-like device that can be used inside a container. Unlike a venet network device, a veth device has a MAC address and can therefore be used in more configurations. When veth is bridged to a CT0 network interface (e.g., eth0), the container can act as an independent host on the network. The container's user can then set up all of the networking themselves, including IPs, gateways, etc.
A virtual Ethernet device consists of two Ethernet devices, one in CT0 (e.g., vethN.0) and a corresponding one in the CT (e.g., eth0), that are connected to each other: a packet sent to one device comes out of the other.
Virtual Ethernet device usage
Kernel module
The vzethdev module should be loaded. You can check it with the following command.
# lsmod | grep vzeth
vzethdev                8224  0
vzmon                  35164  5 vzethdev,vznetdev,vzrst,vzcpt
vzdev                   3080  4 vzethdev,vznetdev,vzmon,vzdquota
In case it is not loaded, load it:
# modprobe vzethdev
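To avoid loading it manually after every reboot, you can list it in your distribution's module configuration. A minimal sketch for Debian-style systems, assuming /etc/modules is read at boot:
# echo vzethdev >> /etc/modules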
MAC addresses
The following steps to generate a MAC address are not necessary, since newer versions of vzctl will automatically generate a MAC address for you. These steps are provided in case you want to set a MAC address manually.
You should use a random MAC address when adding a network interface to a container. Do not use MAC addresses of real eth devices, because this can lead to collisions.
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format.
There is a utility script available for generating MAC addresses: http://www.easyvmx.com/software/easymac.sh. It is used like this:
chmod +x easymac.sh
./easymac.sh -R
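Alternatively, here is a minimal shell sketch that generates a random MAC address. The 02: prefix sets the locally administered bit, so the address cannot collide with vendor-assigned (real hardware) addresses:
# printf '02:%02X:%02X:%02X:%02X:%02X\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))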
Adding veth to a CT
vzctl set <CTID> --netif_add <ifname>[,<mac>,<host_ifname>,<host_mac>,<bridge>]
Here
- ifname is the Ethernet device name in the CT
- mac is its MAC address in the CT
- host_ifname is the Ethernet device name on the host (CT0)
- host_mac is its MAC address on the host (CT0)
- bridge is an optional parameter which can be used in custom network start scripts to automatically add the interface to a bridge. (See the reference to the vznetaddbr script below and persistent bridge configurations.)
Note: All parameters except ifname are optional; parameters that are not specified (except bridge) are generated automatically. Letting vzctl generate them is the preferred method.
Example:
vzctl set 101 --netif_add eth0 --save
If you want to specify everything:
vzctl set 101 --netif_add eth0,00:12:34:56:78:9A,veth101.0,00:12:34:56:78:9B --save
If you want to specify the bridge and autogenerate the other values:
vzctl set 101 --netif_add eth0,,,,vmbr1 --save
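The result is stored in the NETIF parameter of the container's configuration file. For the fully specified example above, the entry would look roughly like this (illustrative; the exact formatting may vary between vzctl versions):
NETIF="ifname=eth0,mac=00:12:34:56:78:9A,host_ifname=veth101.0,host_mac=00:12:34:56:78:9B"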
Removing veth from a CT
vzctl set <CTID> --netif_del <dev_name>|all
Here dev_name is the Ethernet device name in the CT.
Note: If you want to remove all Ethernet devices in the CT, use all.
Example:
vzctl set 101 --netif_del eth0 --save
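To confirm the device is gone, you can list the interfaces inside the container (assuming it is running):
vzctl exec 101 ip link show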
Common configurations with virtual Ethernet devices
Module vzethdev must be loaded to operate with veth devices.
Simple configuration with virtual Ethernet device
Assuming you have 192.168.0.0/24 on your LAN, this section shows how to integrate a container into that LAN using veth.
Start a CT
[host-node]# vzctl start 101
Add veth device to CT
[host-node]# vzctl set 101 --netif_add eth0 --save
This generates a MAC address and creates the eth0 device in the CT, together with the corresponding veth101.0 device on the host.
Configure devices in CT0
The following steps are needed when the CT is not bridged to a CT0 network interface. That is because the CT is connected to a virtual network "behind" CT0, so CT0 must forward packets between its physical network interface and the virtual network interface where the CT is located. The first step below, configuring the interface, is not necessary if the container has already been started, since the device will have been initialized.
[host-node]# ifconfig veth101.0 0
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/proxy_arp
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
Configure device in CT
The following steps show an example of a quick manual configuration of the CT network interface. Typically, you would configure the network settings in /etc/network/interfaces (Debian, see below) or however it is normally configured on your distribution. You can also comment or remove the configuration for venet0, if it exists, because that device will not be used.
[host-node]# vzctl enter 101
[ve-101]# /sbin/ifconfig eth0 0
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0
[ve-101]# /sbin/ip route add default dev eth0
Notes:
- Until you bring eth0 up with ifconfig, it won't appear. When you do, it will use the MAC address that --netif_add assigned earlier.
- 192.168.0.101 is chosen as a non-publicly-routable private IP address, where 101 reminds you that it belongs to CT 101.
- The "ip route" command directs all outgoing traffic to device eth0.
- In theory you could use DHCP with OpenVZ, running dhclient in the CT to pick up an address from your router instead of hardwiring it, as sketched below.
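For the DHCP case mentioned in the last note, the CT-side configuration on Debian would be the standard one shown here. This is only a sketch: it assumes DHCP requests can actually reach a server, e.g. in a bridged setup or with a DHCP relay on CT0.
auto eth0
iface eth0 inet dhcp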
Add route in CT0
Since CT0 is acting as a router between its physical network interface and the virtual network interface of the CT, we need to add a route to the CT to direct traffic to the right destination.
[host-node]# ip route add 192.168.0.101 dev veth101.0
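At this point you can verify the configuration by pinging the container from the host:
[host-node]# ping -c 3 192.168.0.101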
Using a directly routed IPv4 with virtual Ethernet device
Situation
Hardware Node (HN/CT0) has 192.168.0.1/24 with router 192.168.0.254.
We also know that IPv4 10.0.0.1/32 is directly routed to 192.168.0.1 (this is called a fail-over IP).
We want to give this directly routed IPv4 address to a container (CT).
Start container
[host-node]# vzctl start 101
Add veth device to CT
[host-node]# vzctl set 101 --netif_add eth0 --save
This generates a MAC address and creates the eth0 device in the CT, together with the corresponding veth101.0 device on the host.
Configure device and add route in CT0
[host-node]# ifconfig veth101.0 0
[host-node]# ip route add 10.0.0.1 dev veth101.0
You can automate this at VPS creation by using a mount script, $VEID.mount.
The problem here is that the veth interface appears in CT0 only after the VPS has started, so we cannot use these commands directly in the mount script. Instead, we launch a shell script (enclosed in { }) in the background (operator &) that waits for the interface to be ready and then adds the IP route.
Contents of the mount script /etc/vz/conf/101.mount:
#!/bin/bash
# This script sources VPS configuration files in the same order as vzctl does.
# If one of these files does not exist, then something is really broken.
[ -f /etc/vz/vz.conf ] || exit 1
[ -f $VE_CONFFILE ] || exit 1
# Source both files. Note the order, it is important.
. /etc/vz/vz.conf
. $VE_CONFFILE
# Configure veth with IP after VPS has started.
{
  IP=X.Y.Z.T
  DEV=veth101.0
  while sleep 1; do
    /sbin/ifconfig $DEV 0 >/dev/null 2>&1
    if [ $? -eq 0 ]; then
      /sbin/ip route add $IP dev $DEV
      break
    fi
  done
} &
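Here X.Y.Z.T is a placeholder for the directly routed address (10.0.0.1 in this example), and veth101.0 is the host-side device of CT 101.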
Make sure IPv4 forwarding is enabled in CT0
[host-node]# echo 1 > /proc/sys/net/ipv4/ip_forward
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding
You can permanently set this by using /etc/sysctl.conf.
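A minimal sketch of the corresponding /etc/sysctl.conf entries. Note that for per-interface keys whose device name itself contains a dot (such as veth101.0), the key must be written with slashes as separators so the dot in the name is not misparsed:
net.ipv4.ip_forward = 1
net.ipv4.conf.eth0.forwarding = 1
net/ipv4/conf/veth101.0/forwarding = 1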
Configure device in CT
1. Configure IP address
2. Add gateway
3. Add default route
[ve-101]# /sbin/ifconfig eth0 10.0.0.1 netmask 255.255.255.255
[ve-101]# /sbin/ip route add 192.168.0.1 dev eth0
[ve-101]# /sbin/ip route add default via 192.168.0.1
In a Debian container, you can configure this permanently by using /etc/network/interfaces:
auto eth0
iface eth0 inet static
	address 10.0.0.1
	netmask 255.255.255.255
	up /sbin/ip route add 192.168.0.1 dev eth0
	up /sbin/ip route add default via 192.168.0.1
Virtual Ethernet device with IPv6
See the VEs and HNs in same subnets article.
Virtual Ethernet devices can be joined in one bridge
Perform steps 1-4 from the Simple configuration section above for several containers and/or veth devices.
Create bridge device
[host-node]# brctl addbr vzbr0
Add veth devices to bridge
[host-node]# brctl addif vzbr0 veth101.0
...
[host-node]# brctl addif vzbr0 veth101.n
[host-node]# brctl addif vzbr0 veth102.0
...
[host-node]# brctl addif vzbr0 vethXXX.N
Configure bridge device
[host-node]# ifconfig vzbr0 0
Add routes in CT0
[host-node]# ip route add 192.168.101.1 dev vzbr0
...
[host-node]# ip route add 192.168.101.n dev vzbr0
[host-node]# ip route add 192.168.102.1 dev vzbr0
...
[host-node]# ip route add 192.168.XXX.N dev vzbr0
This gives you a more convenient configuration: all routes to the containers go through this bridge, and the containers can communicate with each other even without those routes.
Making a veth-device persistent
According to http://bugzilla.openvz.org/show_bug.cgi?id=301, the bug that prevented making a veth device persistent was closed as "Obsoleted now when --veth_add/--veth_del are introduced".
See http://wiki.openvz.org/w/index.php?title=Virtual_Ethernet_device&diff=5990&oldid=5989#Making_a_veth-device_persistent for a workaround that used to be described in this section.
That's it! When you restart the CT, you should see new lines in the output indicating that the interface is being configured and a route added. You should then be able to ping the host, enter the CT, and use the network.
Making a bridged veth-device persistent
Like the above example, here is how to add the veth device to a bridge in a persistent way.
vzctl includes a 'vznetaddbr' script, which makes use of the bridge parameter of the --netif_add switch.
Just create /etc/vz/vznet.conf containing the following.
#!/bin/bash
EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"
The script uses 'vmbr0' as the default bridge name when no bridge is specified.
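After a container configured with the bridge parameter starts, you can verify that its veth device was actually added to the bridge:
[host-node]# brctl show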
Virtual Ethernet devices + VLAN
This configuration can be done by adding a VLAN device to the previous configuration.
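A minimal sketch, assuming VLAN id 10 on top of the host's eth0 (the id and device names are illustrative): create the VLAN device, bring it up, and add it to the bridge from the previous section.
[host-node]# vconfig add eth0 10
[host-node]# ifconfig eth0.10 0
[host-node]# brctl addif vzbr0 eth0.10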