<translate><!--T:1-->
'''Virtual Ethernet device''' is an Ethernet-like device that can be used inside a [[container]]. Unlike a [[venet]] network device, a [[veth]] device has a MAC address. Therefore, it can be used in more configurations. When veth is bridged to a [[CT0]] network interface (e.g., eth0), the container can act as an independent host on the network. The container's user can set up all of the networking himself, including IPs, gateways, etc.
<!--T:2-->
A virtual Ethernet device consists of two Ethernet devices, one in [[CT0]] (e.g., vethN.0) and a corresponding one in a [[CT]] (e.g., eth0), that are connected to each other. If a packet is sent to one device, it will come out from the other device.
== Virtual Ethernet device usage == <!--T:3-->

=== Kernel module === <!--T:4-->

The <code>vzethdev</code> module should be loaded. You can check it with the following command:
<pre>
# lsmod | grep vzeth
</pre>
<!--T:5-->
In case it is not loaded, load it:
<pre>
# modprobe vzethdev
</pre>
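On Debian-based hosts, for example, you can also list the module in <tt>/etc/modules</tt> so that it is loaded at every boot; other distributions have their own mechanisms for this:
<pre>
# echo vzethdev >> /etc/modules
</pre>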
=== MAC addresses === <!--T:6-->

The following steps to generate a MAC address are not necessary, since newer versions of vzctl will automatically generate a MAC address for you. These steps are provided in case you want to set a MAC address manually.
<!--T:7-->
You should use a random MAC address when adding a network interface to a container. Do not use MAC addresses of real eth devices, because this can lead to collisions.

<!--T:8-->
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format.

<!--T:9-->
There is a utility script available for generating MAC addresses: https://github.com/moutai/eucalyptus-utils/blob/master/easymac.sh. It is used like this:

<!--T:10-->
<pre>
chmod +x easymac.sh
./easymac.sh -R
</pre>

=== Adding veth to a CT === <!--T:11-->

<!--T:12-->
<pre>
vzctl set <CTID> --netif_add <ifname>[,<mac>,<host_ifname>,<host_mac>,<bridge>]
</pre>

<!--T:13-->
Here
* <tt>ifname</tt> is the Ethernet device name in the CT
* <tt>mac</tt> is its MAC address in the CT
* <tt>host_ifname</tt> is the Ethernet device name on the host ([[CT0]])
* <tt>host_mac</tt> is its MAC address on the host ([[CT0]]); if you want independent communication with the container through the bridge, you should explicitly specify the multicast MAC address here (FE:FF:FF:FF:FF:FF)
* <tt>bridge</tt> is an optional parameter which can be used in custom network start scripts to automatically add the interface to a bridge (see the reference to the vznetaddbr script below and persistent bridge configurations)
<!--T:14-->
{{Note|All parameters except <code>ifname</code> are optional. Missing parameters, except for bridge, are automatically generated if not specified.}}
<!--T:15-->
Example:

<!--T:16-->
<pre>
vzctl set 101 --netif_add eth0 --save
</pre>
<!--T:17-->
If you want to specify everything:
<!--T:18-->
<pre>
vzctl set 101 --netif_add eth0,00:12:34:56:78:9A,veth101.0,00:12:34:56:78:9B --save
</pre>
<!--T:19-->
If you want to use independent communication through the bridge:

<!--T:20-->
<pre>
vzctl set 101 --netif_add eth0,00:12:34:56:78:9A,veth101.0,FE:FF:FF:FF:FF:FF,vzbr0 --save
</pre>

<!--T:21-->
If you want to specify the bridge and autogenerate the other values:

<!--T:22-->
<pre>
vzctl set 101 --netif_add eth0,,,,vzbr0 --save
</pre>

=== Removing veth from a CT === <!--T:23-->

<!--T:24-->
<pre>
vzctl set <CTID> --netif_del <dev_name>|all
</pre>

<!--T:25-->
Here
* <code>dev_name</code> is the Ethernet device name in the [[CT]].

<!--T:26-->
{{Note|If you want to remove all Ethernet devices in a CT, use <code>all</code>.}}
<!--T:27-->
Example:

<!--T:28-->
<pre>
vzctl set 101 --netif_del eth0 --save
</pre>
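To confirm the change was saved, you can inspect the NETIF entry in the container configuration file (assuming the usual /etc/vz/conf location used elsewhere on this page):
<pre>
# grep NETIF /etc/vz/conf/101.conf
</pre>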
== Common configurations with virtual Ethernet devices == <!--T:29-->
The <tt>vzethdev</tt> module must be loaded to operate with veth devices.
=== Simple configuration with virtual Ethernet device === <!--T:30-->

<!--T:31-->
Assuming that 192.168.0.0/24 is being used on your LAN, the following sections show how to configure a container for the LAN using veth.

==== Start a CT ==== <!--T:32-->

<!--T:33-->
<pre>
[host-node]# vzctl start 101
</pre>
==== Add veth device to a CT ==== <!--T:34-->

<!--T:35-->
<pre>
[host-node]# vzctl set 101 --netif_add eth0 --save
</pre>

<!--T:36-->
This allocates a MAC address and associates it with the host eth0 port.
==== Configure devices in CT0 ==== <!--T:37-->

The following steps are needed when the [[CT]] is '''not''' bridged to a [[CT0]] network interface. That is because the [[CT]] is connected to a virtual network that is "behind" [[CT0]]. [[CT0]] must forward packets between its physical network interface and the virtual network interface where the [[CT]] is located. The first step below, configuring the interface, is not necessary if the container has been started, since the device will have been initialized.
<pre>
[host-node]# ifconfig veth101.0 0
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/proxy_arp
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
</pre>
==== Configure device in CT ==== <!--T:38-->

The following steps show an example of a quick manual configuration of the [[CT]] network interface. Typically, you would configure the network settings in /etc/network/interfaces (Debian, see below), or however the network is normally configured on your distribution. You can also comment out or remove the configuration for venet0, if it exists, because that device will not be used.
<pre>
[host-node]# vzctl enter 101
[ve-101]# /sbin/ifconfig eth0 0
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0
[ve-101]# /sbin/ip route add default dev eth0
</pre>
<!--T:39-->
Notes:
* Until you ifconfig eth0, it won't appear; when you do, it will use the MAC address that netif_add assigned earlier.
* 192.168.0.101 is chosen to be an [[unrouteable private ip address]], where 101 reminds you that it is node 101.
* The "ip route" command tells all traffic to head to "device eth0".
* In theory you could [[use dhcpd with OpenVZ]] and dhclient to pick up a DHCP address from your router instead of hardwiring it.
** http://openvz.org/pipermail/users/2005-November/000020.html

==== Add route in [[CT0]] ==== <!--T:40-->

Since [[CT0]] is acting as a router between its physical network interface and the virtual network interface of the [[CT]], we need to add a route to the [[CT]] to direct traffic to the right destination.
<pre>
[host-node]# ip route add 192.168.0.101 dev veth101.0
</pre>

=== Using a directly routed IPv4 with virtual Ethernet device === <!--T:41-->

==== Situation ==== <!--T:42-->

The Hardware Node (HN/CT0) has 192.168.0.1/24 with router 192.168.0.254.

<!--T:43-->
We also know that IPv4 address 10.0.0.1/32 is directly routed to 192.168.0.1 (this is called a ''fail-over IP'').

<!--T:44-->
We want to give this directly routed IPv4 address to a container (CT).
==== Start container ==== <!--T:45-->

<!--T:46-->
<pre>
[host-node]# vzctl start 101
</pre>
==== Add veth device to CT ==== <!--T:47-->

<!--T:48-->
<pre>
[host-node]# vzctl set 101 --netif_add eth0 --save
</pre>

<!--T:49-->
This allocates a MAC address and associates it with the host eth0 port.

==== Configure device and add route in CT0 ==== <!--T:50-->

<!--T:51-->
<pre>
[host-node]# ifconfig veth101.0 0
[host-node]# ip route add 10.0.0.1 dev veth101.0
</pre>
<!--T:52-->
You can automate this at VPS creation by using a mount script, <tt>$VEID.mount</tt>.

<!--T:53-->
The problem here is that the ''veth'' interface appears in CT0 '''after''' the VPS has started, therefore we cannot directly use the commands in the mount script. Instead, we launch a shell script (enclosed by { }) in the background (operator '''&''') that waits for the interface to be ready and then adds the IP route.

<!--T:54-->
Contents of the mount script <tt>/etc/vz/conf/101.mount</tt>:
<pre>
#!/bin/bash
# This script sources VPS configuration files in the same order as vzctl does

<!--T:55-->
# if one of these files does not exist then something is really broken
[ -f /etc/vz/vz.conf ] || exit 1
[ -f $VE_CONFFILE ] || exit 1

<!--T:56-->
# source both files. Note the order, it is important.
. /etc/vz/vz.conf
. $VE_CONFFILE

<!--T:57-->
# Configure veth with IP after VPS has started
{
	IP=X.Y.Z.T
	DEV=veth101.0
	while sleep 1; do
		/sbin/ifconfig $DEV 0 >/dev/null 2>&1
		if [ $? -eq 0 ]; then
			/sbin/ip route add $IP dev $DEV
			break
		fi
	done
} &
</pre>
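As a quick sanity check (reusing the fail-over address from the example above; exact output may vary), you can confirm that the route appears shortly after the container starts:
<pre>
[host-node]# vzctl start 101
[host-node]# ip route | grep 10.0.0.1
10.0.0.1 dev veth101.0  scope link
</pre>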
==== Make sure IPv4 forwarding is enabled in CT0 ==== <!--T:58-->

<!--T:59-->
<pre>
[host-node]# echo 1 > /proc/sys/net/ipv4/ip_forward
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding
</pre>
You can set this permanently by using <tt>/etc/sysctl.conf</tt>.
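For example, the equivalent <tt>/etc/sysctl.conf</tt> entries might look like the sketch below. Note that sysctl accepts a '/' in place of the literal '.' in an interface name such as veth101.0, and that the per-interface settings can only be applied once the interface exists:
<pre>
net.ipv4.ip_forward = 1
net.ipv4.conf.eth0.forwarding = 1
net.ipv4.conf.veth101/0.forwarding = 1
</pre>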
==== Configure device in CT ==== <!--T:60-->

<!--T:61-->
1. Configure the IP address

<!--T:62-->
2. Add the gateway

<!--T:63-->
3. Add the default route

<!--T:64-->
<pre>
[ve-101]# /sbin/ifconfig eth0 10.0.0.1 netmask 255.255.255.255
[ve-101]# /sbin/ip route add 192.168.0.1 dev eth0
[ve-101]# /sbin/ip route add default via 192.168.0.1
</pre>
<!--T:65-->
In a Debian container, you can configure this permanently by using <tt>/etc/network/interfaces</tt>:
<pre>
auto eth0
iface eth0 inet static
	address 10.0.0.1
	netmask 255.255.255.255
	up /sbin/ip route add 192.168.0.1 dev eth0
	up /sbin/ip route add default via 192.168.0.1
</pre>
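Inside the container, the resulting routing table should then look roughly like this (output format varies between distributions and tool versions):
<pre>
[ve-101]# ip route
192.168.0.1 dev eth0  scope link
default via 192.168.0.1 dev eth0
</pre>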
=== Virtual Ethernet device with IPv6 === <!--T:66-->

<!--T:67-->
See the [[VEs and HNs in same subnets]] article.

=== Independent Virtual Ethernet communication through the bridge === <!--T:68-->

Bridging a [[CT]] interface to a [[CT0]] interface is the magic that allows the [[CT]] to be an independent host on the network, with its own IP address, gateway, etc. [[CT0]] does not need any configuration for forwarding packets to the [[CT]], performing proxy arp for the [[CT]], or even routing.

<!--T:69-->
To manually configure a bridge and add devices to it, perform steps 1 - 4 from the Simple configuration chapter for several containers and/or veth devices, using FE:FF:FF:FF:FF:FF as the [[CT0]] veth side MAC address, and then follow these steps.

==== Create bridge device ==== <!--T:70-->
<pre>
[host-node]# brctl addbr vzbr0
</pre>
==== Add veth devices to bridge ====<!--T:71-->
<pre>
[host-node]# brctl addif vzbr0 veth101.0
</pre>
==== Configure bridge device ====<!--T:72-->
<pre>
[host-node]# ifconfig vzbr0 0
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/proxy_arp
</pre>
 
==== Add routes in [[CT0]] ====
<pre>
[host-node]# ip route add 192.168.101.1 dev vzbr0
...
[host-node]# ip route add 192.168.101.n dev vzbr0
[host-node]# ip route add 192.168.102.1 dev vzbr0
...
...
[host-node]# ip route add 192.168.XXX.N dev vzbr0
</pre>
Thus you'll have a more convenient configuration, i.e. all routes to CTs will go through this bridge, and CTs can communicate with each other even without these routes.

=== Automating the bridge === <!--T:73-->

The most convenient method is to automatically create the bridge at boot as a network interface, add the physical interface from [[CT0]], and then add the interface from each [[CT]] as it starts. All devices are connected to a virtual switch, and containers directly access the network just as any other host, without additional configuration on [[CT0]].
<!--T:74-->
In Debian, configure the network interface on [[CT0]] to plug into a bridge in /etc/network/interfaces. The [[CT0]] physical device is added to the bridge as the "uplink" port to the physical network. You need to have bridge-utils installed for this to work.
<!--T:75-->
The bridge forwarding delay is set to 0 seconds so that forwarding begins immediately when a new interface is added to the bridge. The default delay is 30 seconds, during which the bridge pauses all traffic to listen and figure out where devices are. This can interrupt services when a container is added to the bridge. If you aren't running the spanning tree protocol (off by default) and the bridge does not create a loop in your network, then there is no need for a forwarding delay.
<pre>
iface eth0 inet manual

<!--T:76-->
auto vzbr0
iface vzbr0 inet static
	bridge_ports eth0
	bridge_fd 0
	address 192.168.1.100
	netmask 255.255.255.0
	gateway 192.168.1.254
</pre>
Follow the steps below for making a bridged veth-device persistent with the included script. That will automatically add each container to the bridge when it is started. Finally, specify vzbr0 as the bridge when adding the network interface to a container, as described above. No configuration is needed on [[CT0]] for forwarding packets, proxy arp, or additional routes. The interface in each [[CT]] can be configured as desired. Everything "just works" according to normal network interface configuration and default routing rules. Note that, as discussed in the troubleshooting section below, bridged packets by default pass through the FORWARD iptables chain. Take care when adding rules to that table that bridged packets are not mistakenly blocked. This behavior can be disabled, if desired (sysctl: <code>net.bridge.bridge-nf-call-iptables</code>).
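For example, to disable it at runtime (assuming the bridge netfilter module is loaded; add the corresponding line to /etc/sysctl.conf to make it permanent):
<pre>
[host-node]# echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables
</pre>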
=== Making a veth-device persistent === <!--T:77-->

These steps are no longer necessary, as the veth device is automatically created when the container is started. They remain here as a reference.

<!--T:78-->
According to http://bugzilla.openvz.org/show_bug.cgi?id=301 , the bug that prevented making a veth device persistent was marked "Obsoleted now when --veth_add/--veth_del are introduced".

<!--T:79-->
See http://wiki.openvz.org/w/index.php?title=Virtual_Ethernet_device&diff=5990&oldid=5989#Making_a_veth-device_persistent for the workaround that used to be described in this section.

<!--T:80-->
That's it! At this point, when you restart the CT, you should see a new line in the output indicating that the interface is being configured and a new route is being added. You should then be able to ping the host, and to enter the CT and use the network.
=== Making a bridged veth-device persistent === <!--T:81-->
<!--T:82-->
Like the above example, here is how to add the veth device to a bridge in a persistent way.

<!--T:83-->
vzctl includes a 'vznetaddbr' script, which makes use of the ''bridge'' parameter of the --netif_add switch.

<!--T:84-->
Just create /etc/vz/vznet.conf containing the following.
<!--T:85-->
<pre>
EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"
</pre>
<!--T:86-->
Or just run the command:
<pre>
echo 'EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"' > /etc/vz/vznet.conf
</pre>
<!--T:87-->
The script uses 'vmbr0' as the default bridge name when no bridge is specified.
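Once a container with a bridged veth has started, you can check that its interface joined the bridge (names here follow the earlier examples; the bridge id shown is illustrative):
<pre>
[host-node]# brctl show vzbr0
bridge name     bridge id               STP enabled     interfaces
vzbr0           8000.001122334455       no              eth0
                                                        veth101.0
</pre>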
=== Virtual Ethernet devices + VLAN === <!--T:88-->

This configuration can be done by adding a VLAN device to the previous configuration, as sketched below.
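A minimal sketch, assuming VLAN ID 10 is trunked to eth0 and the 8021q module is available: create the VLAN device on the host, create a bridge for it, and add the container's veth device just as in the bridge examples above:
<pre>
[host-node]# modprobe 8021q
[host-node]# ip link add link eth0 name eth0.10 type vlan id 10
[host-node]# ip link set eth0.10 up
[host-node]# brctl addbr vzbr10
[host-node]# brctl addif vzbr10 eth0.10
[host-node]# brctl addif vzbr10 veth101.0
</pre>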
== See also ==<!--T:89-->
* [[Virtual network device]]
* [[Differences between venet and veth]]
* [[Using private IPs for Hardware Nodes]]
* Patch: [[Disable venet interface]]
* Troubleshooting: [[Bridge doesn't forward packets]]
== External links ==<!--T:90-->
* [http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO/hints-daemons-radvd.html Linux IPv6 HOWTO, a chapter about radvd]
* [http://sysadmin-ivanov.blogspot.com/2008/02/2-veth-with-2-bridges-on-openvz-at.html 2 veth with 2 bridges setup]
* [https://forum.proxmox.com/threads/physical-host-with-2-nics-each-with-different-gateways.1733/#post-9287 Non default gateway for CentOS OpenVZ container] - this applies to BlueOnyx in Proxmox as well. | [[Media:TwoGWsPVECentOS.pdf|Cache]]
</translate>
[[Category: Networking]]
[[Category: HOWTO]]