'''Virtual Ethernet device''' is an Ethernet-like device that can be used inside a [[container]]. Unlike a [[venet]] network device, a [[veth]] device has a MAC address. Therefore, it can be used in more configurations. When veth is bridged to a [[CT0]] network interface (e.g., eth0), the container can act as an independent host on the network. The container's user can set up all of the networking himself, including IPs, gateways, etc.
A virtual Ethernet device consists of two Ethernet devices: one in [[CT0]] (e.g., vethN.0) and a corresponding one in the CT (e.g., eth0) that are connected to each other. If a packet is sent to one device, it will come out of the other device.
== Virtual Ethernet device usage ==
=== Kernel module ===

The <code>vzethdev</code> module should be loaded. You can check it with the following command:
<pre>
# lsmod | grep vzeth
</pre>
In case it is not loaded, load it:
<pre>
# modprobe vzethdev
</pre>
{{Note|Since vzctl version 3.0.11, vzethdev is loaded by /etc/init.d/vz, so in most cases you do not need to load it manually.}}

=== MAC addresses ===

The following steps to generate a MAC address are not necessary, since newer versions of vzctl will automatically generate a MAC address for you. These steps are provided in case you want to set a MAC address manually.

You should use a random MAC address when adding a network interface to a container. Do not use MAC addresses of real eth devices, because this can lead to collisions.
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format.
There is a utility script available for generating MAC addresses: https://github.com/moutai/eucalyptus-utils/blob/master/easymac.sh. It is to be used like this:
<pre>
chmod +x easymac.sh
./easymac.sh -R
</pre>
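If you prefer not to fetch the script, a random locally administered MAC address can also be generated with a plain bash one-liner; this is just a sketch and not part of the official tooling:
<pre>
# printf '02:%02X:%02X:%02X:%02X:%02X\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))
</pre>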
=== Adding veth to a CT ===
<pre>
vzctl set <CTID> --netif_add <ifname>[,<mac>,<host_ifname>,<host_mac>,<bridge>]
</pre>

Here
* <tt>ifname</tt> is the Ethernet device name in the CT
* <tt>mac</tt> is its MAC address in the CT
* <tt>host_ifname</tt> is the Ethernet device name on the host ([[CT0]])
* <tt>host_mac</tt> is its MAC address on the host ([[CT0]]). If you want independent communication with the container through the bridge, you should explicitly specify the multicast MAC address here (FE:FF:FF:FF:FF:FF).
* <tt>bridge</tt> is an optional parameter which can be used in custom network start scripts to automatically add the interface to a bridge. (See the reference to the vznetaddbr script below and persistent bridge configurations.)

{{Note|All parameters except <code>ifname</code> are optional. Missing parameters, except for bridge, are automatically generated if not specified.}}

NB: there are no spaces after the commas.
Example:
<pre>
vzctl set 101 --netif_add eth0 --save
</pre>
If you want to specify everything:
<pre>
vzctl set 101 --netif_add eth0,00:12:34:56:78:9A,veth101.0,00:12:34:56:78:9B --save
</pre>
If you want to use independent communication through the bridge:
<pre>
vzctl set 101 --netif_add eth0,00:12:34:56:78:9A,veth101.0,FE:FF:FF:FF:FF:FF,vzbr0 --save
</pre>
If you want to specify the bridge and autogenerate the other values:
<pre>
vzctl set 101 --netif_add eth0,,,,vzbr0 --save
</pre>
=== Removing veth from a CT ===
<pre>
vzctl set <CTID> --netif_del <dev_name>|all
</pre>
Here
* <code>dev_name</code> is the Ethernet device name in the [[CT]].
{{Note|If you want to remove all Ethernet devices in a CT, use <code>all</code>.}}
Example:
<pre>
vzctl set 101 --netif_del eth0 --save
</pre>
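To confirm the device was removed, you can list the container's interfaces from the host; this check is illustrative and not part of the original steps:
<pre>
[host-node]# vzctl exec 101 ip addr
</pre>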
== Common configurations with virtual Ethernet devices ==
Module <tt>vzethdev</tt> must be loaded to operate with veth devices.
=== Simple configuration with virtual Ethernet device ===
Assuming that 192.168.0.0/24 is being used on your LAN, the following sections show how to configure a container for the LAN using veth.

==== Start a CT ====
<pre>
[host-node]# vzctl start 101
</pre>
==== Add veth device to CT ====
<pre>
[host-node]# vzctl set 101 --netif_add eth0 --save
</pre>
This allocates a MAC address and associates it with the host eth0 port.
==== Configure devices in CT0 ====
The following steps are needed when the [[CT]] is '''not''' bridged to a [[CT0]] network interface. That is because the [[CT]] is connected to a virtual network that is "behind" [[CT0]]. [[CT0]] must forward packets between its physical network interface and the virtual network interface where the [[CT]] is located. The first step below, configuring the interface, is not necessary if the container has already been started, since the device will have been initialized.
<pre>
[host-node]# ifconfig veth101.0 0
</pre>
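Because [[CT0]] forwards packets for the [[CT]], IPv4 forwarding also needs to be enabled; a sketch of the usual commands (the same settings are shown again in the fail-over IP example below):
<pre>
[host-node]# echo 1 > /proc/sys/net/ipv4/ip_forward
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding
</pre>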
==== Configure device in CT ====
The following steps show an example of a quick manual configuration of the [[CT]] network interface. Typically, you would configure the network settings in /etc/network/interfaces (Debian, see below) or however it is normally configured on your distribution. You can also comment out or remove the configuration for venet0, if it exists, because that device will not be used.
<pre>
[host-node]# vzctl enter 101
</pre>
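Once inside the container, a minimal manual configuration might look like the following sketch (192.168.0.101 and the plain default route via eth0 are example values matching the notes below; adjust for your LAN):
<pre>
[ve-101]# /sbin/ifconfig eth0 192.168.0.101 netmask 255.255.255.0
[ve-101]# /sbin/ip route add default dev eth0
</pre>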
Notes:
* Until you ifconfig eth0 it won't appear. When you do, it will use the MAC address that netif_add added earlier.
* 192.168.0.101 is chosen to be an [[unrouteable private ip address]]; the 101 reminds you that it is node 101.
* The "ip route" tells all traffic to head to "device eth0".
* In theory you could [[use dhcpd with OpenVZ]] and dhclient to pick up a DHCP address from your router instead of hardwiring it.
** http://openvz.org/pipermail/users/2005-November/000020.html

==== Add route in [[CT0]] ====
Since [[CT0]] is acting as a router between its physical network interface and the virtual network interface of the [[CT]], we need to add a route to the [[CT]] to direct traffic to the right destination.
<pre>
[host-node]# ip route add 192.168.0.101 dev veth101.0
</pre>

=== Using a directly routed IPv4 with virtual Ethernet device ===

==== Situation ====
Hardware Node (HN/CT0) has 192.168.0.1/24 with router 192.168.0.254.

We also know that IPv4 10.0.0.1/32 is directly routed to 192.168.0.1 (this is called a ''fail-over IP'').

We want to give this directly routed IPv4 address to a container (CT).
==== Start container ====
<pre>
[host-node]# vzctl start 101
</pre>
==== Add veth device to CT ====
<pre>
[host-node]# vzctl set 101 --netif_add eth0 --save
</pre>
This allocates a MAC address and associates it with the host eth0 port.

==== Configure device and add route in CT0 ====
<pre>
[host-node]# ifconfig veth101.0 0
[host-node]# ip route add 10.0.0.1 dev veth101.0
</pre>
You can automate this at VPS creation by using a mount script <tt>$VEID.mount</tt>.

The problem here is that the ''veth'' interface appears in [[CT0]] '''after''' the VPS has started, therefore we cannot use the commands in the mount script directly. We launch a shell script (enclosed by { }) in the background (operator '''&''') that waits for the interface to be ready and then adds the IP route.

Contents of the mount script <tt>/etc/vz/conf/101.mount</tt>:
<pre>
#!/bin/bash
# This script sources VPS configuration files in the same order as vzctl does

# if one of these files does not exist then something is really broken
[ -f /etc/vz/vz.conf ] || exit 1
[ -f $VE_CONFFILE ] || exit 1

# source both files. Note the order, it is important.
. /etc/vz/vz.conf
. $VE_CONFFILE

# Configure veth with IP after VPS has started
{
  IP=X.Y.Z.T
  DEV=veth101.0
  while sleep 1; do
    /sbin/ifconfig $DEV 0 >/dev/null 2>&1
    if [ $? -eq 0 ]; then
      /sbin/ip route add $IP dev $DEV
      break
    fi
  done
} &
</pre>
==== Make sure IPv4 forwarding is enabled in CT0 ====
<pre>
[host-node]# echo 1 > /proc/sys/net/ipv4/ip_forward
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding
</pre>
You can permanently set this by using <tt>/etc/sysctl.conf</tt>.
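For example, a minimal <tt>/etc/sysctl.conf</tt> entry (a sketch; per-interface settings like those above can be added the same way, but only take effect for interfaces that exist when sysctl is run):
<pre>
net.ipv4.ip_forward = 1
</pre>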
==== Configure device in CT ====

1. Configure IP address

2. Add gateway

3. Add default route
<pre>
[ve-101]# /sbin/ifconfig eth0 10.0.0.1 netmask 255.255.255.255
[ve-101]# /sbin/ip route add 192.168.0.1 dev eth0
[ve-101]# /sbin/ip route add default via 192.168.0.1
</pre>
In a Debian container, you can configure this permanently by using <tt>/etc/network/interfaces</tt>:
<pre>
auto eth0
iface eth0 inet static
    address 10.0.0.1
    netmask 255.255.255.255
    up /sbin/ip route add 192.168.0.1 dev eth0
    up /sbin/ip route add default via 192.168.0.1
</pre>
=== Virtual Ethernet device with IPv6 ===
See the [[VEs and HNs in same subnets]] article.

=== Independent Virtual Ethernet communication through the bridge ===

Bridging a [[CT]] interface to a [[CT0]] interface is the magic that allows the [[CT]] to be an independent host on the network with its own IP address, gateway, etc. [[CT0]] does not need any configuration for forwarding packets to the [[CT]], performing proxy ARP for the [[CT]], or even routing.

To manually configure a bridge and add devices to it, perform steps 1 - 4 from the Simple configuration chapter for several containers and/or veth devices, using FE:FF:FF:FF:FF:FF as the [[CT0]] veth-side MAC address, and then follow these steps.

==== Create bridge device ====
<pre>
[host-node]# brctl addbr vzbr0
</pre>
==== Add veth devices to bridge ====
<pre>
[host-node]# brctl addif vzbr0 veth101.0
</pre>
==== Configure bridge device ====
<pre>
[host-node]# ifconfig vzbr0 0
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/proxy_arp
</pre>
==== Automating the bridge ====

The most convenient method is to automatically create the bridge at boot as a network interface, add the physical interface from [[CT0]], and then add the interface from each [[CT]] as it starts. All devices are connected to a virtual switch, and containers can directly access the network just as any other host, without additional configuration on [[CT0]].
In Debian, configure the network interface on [[CT0]] to plug into a bridge in /etc/network/interfaces. The [[CT0]] physical device is added to the bridge as the "uplink" port to the physical network. You need to have bridge-utils installed for this to work.
The bridge forwarding delay is set to 0 seconds so that forwarding begins immediately when a new interface is added to a bridge. The default delay is 30 seconds, during which the bridge pauses all traffic to listen and figure out where devices are. This can interrupt services when a container is added to the bridge. If you aren't running the spanning tree protocol (off by default) and the bridge does not create a loop in your network, then there is no need for a forwarding delay.
<pre>
iface eth0 inet manual
auto vzbr0
iface vzbr0 inet static
    bridge_ports eth0
    bridge_fd 0
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.254
</pre>

Follow the steps below for making a veth bridge persistent with the included script. That will automatically add each container to the bridge when it is started. Finally, specify vzbr0 as the bridge when adding the network interface to a container, as described above. No configuration is needed on [[CT0]] for forwarding packets, proxy ARP, or additional routes.

The interface in each [[CT]] can be configured as desired. Everything "just works" according to normal network interface configuration and default routing rules. Note that, as discussed in the troubleshooting section below, bridged packets by default pass through the FORWARD iptables chain. Take care when adding rules to that table that bridged packets are not mistakenly blocked. This behavior can be disabled, if desired (sysctl: <code>net.bridge.bridge-nf-call-iptables</code>).
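For example, to disable that behavior at runtime (a sketch; add the setting to /etc/sysctl.conf to make it persistent):
<pre>
[host-node]# sysctl -w net.bridge.bridge-nf-call-iptables=0
</pre>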
=== Making a veth-device persistent ===

These steps are no longer necessary, as the veth device is automatically created when the container is started. They remain here as a reference.
According to http://bugzilla.openvz.org/show_bug.cgi?id=301, the bug that kept veth devices from being persistent was marked "Obsoleted now when --veth_add/--veth_del are introduced".
See http://wiki.openvz.org/w/index.php?title=Virtual_Ethernet_device&diff=5990&oldid=5989#Making_a_veth-device_persistent for a workaround that used to be described in this section.
That's it! At this point, when you restart the CT you should see a new line in the output, indicating that the interface is being configured and a new route being added. You should then be able to ping the host, and to enter the CT and use the network.
That's it=== Making a bridged veth-device persistent === <! At this point, when you restart the VE you should see a new line in the output, indicating that the interface is being configured and a new route being added. And you should be able to ping the host, and to enter the VE and use the network.--T:81-->
=== Making a bridged veth-device persistent ===

Like the above example, here is how to add the veth device to a bridge in a persistent way.
vzctl includes a 'vznetaddbr' script, which makes use of the ''bridge'' parameter of the --netif_add switch.
Just create /etc/vz/vznet.conf containing the following:
<pre>
EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"
</pre>
Or just run the command:
<pre>
echo 'EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"' > /etc/vz/vznet.conf
</pre>
The script uses 'vmbr0' as the default bridge name when no bridge is specified. When the CT is started, the veth device specified in the NETIF value is added to the given bridge. You can check this with <code>brctl show</code>. Inside the CT you can configure the interface statically or using DHCP, as a real interface attached to a switch on the LAN.
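Putting the pieces together, an illustrative end-to-end sequence (assuming CT 101 and the vzbr0 bridge configured above; adapt names as needed):
<pre>
[host-node]# echo 'EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"' > /etc/vz/vznet.conf
[host-node]# vzctl set 101 --netif_add eth0,,,,vzbr0 --save
[host-node]# vzctl restart 101
[host-node]# brctl show
</pre>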
=== Virtual Ethernet devices + VLAN ===
This configuration can be done by adding a VLAN device to the previous configuration.
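For illustration, here is a sketch of attaching a container's veth device to VLAN 10 of the host's eth0 (the VLAN ID, device names, and bridge name are example values, and the 8021q module must be loaded):
<pre>
[host-node]# ip link add link eth0 name eth0.10 type vlan id 10
[host-node]# ip link set eth0.10 up
[host-node]# brctl addbr vzbr10
[host-node]# brctl addif vzbr10 eth0.10
[host-node]# brctl addif vzbr10 veth101.0
</pre>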
== See also ==
* [[Virtual network device]]
* [[Differences between venet and veth]]
* [[Using private IPs for Hardware Nodes]]
* Patch: [[Disable venet interface]]
* Troubleshooting: [[Bridge doesn't forward packets]]
== External links ==
* [http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO/hints-daemons-radvd.html Linux IPv6 HOWTO, a chapter about radvd]
* [http://sysadmin-ivanov.blogspot.com/2008/02/2-veth-with-2-bridges-on-openvz-at.html 2 veth with 2 bridges setup]
* [https://forum.proxmox.com/threads/physical-host-with-2-nics-each-with-different-gateways.1733/#post-9287 Non default gateway for CentOS OpenVZ container] - this applies to BlueOnyx in Proxmox as well. | [[Media:TwoGWsPVECentOS.pdf|Cache]]
[[Category: Networking]]
[[Category: HOWTO]]