Multiple network interfaces and ARP flux
This page discusses working with multiple network interfaces on the Hardware Node (HN), how this results in ARP Flux, and how to address this.
The Simple Case
In the simple case you have multiple network interfaces on the HN, all with IP addresses in the same subnet. Each of your containers also has an IP address in the same subnet. You don't care which interfaces your containers use.
So, no action is required; everything just works. Set up OpenVZ normally.
The only downside is ARP flux. This describes the usually harmless condition where a network address (layer 3) drifts between multiple hardware addresses (layer 2). While this may cause some confusion for anyone troubleshooting, or generate alarms on network monitoring systems, it doesn't interrupt network traffic.
For an example of what this may look like, see the example and tcpdump captures below.
A More Complex Case
Let's say you have three network interfaces on the HN, all with IP addresses on the same subnet. Each of your containers also has an IP address on the same subnet. But now you do care which interface your containers use.
For example, you want some of your containers to always use `eth3`, and some to use `eth4`. But none of the container traffic should use `eth0`, which is reserved for use by the HN only. This makes sense if you have containers that may generate or receive a lot of traffic, and you don't want your remote administration of the server over `eth0` to degrade or get blocked as a result.
Example Network Setup
To make this clear we'll use the following HN configuration. We'll also have another system to act as the client.
| System | Interface | MAC Address       | IP Address     |
|--------|-----------|-------------------|----------------|
| HN     | eth0      | 00:0c:29:b3:a2:54 | 192.168.18.10  |
| HN     | eth3      | 00:0c:29:b3:a2:68 | 192.168.18.11  |
| HN     | eth4      | 00:0c:29:b3:a2:5e | 192.168.18.12  |
| client | eth0      | 00:0c:29:d2:c7:aa | 192.168.18.129 |
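As a reference point, this addressing could be expressed on a Debian HN via /etc/network/interfaces roughly as follows. This is a sketch: the netmask is an assumption about the test network, and any default gateway is omitted.

```
# /etc/network/interfaces on the HN (sketch, assuming a /24 network)
auto eth0
iface eth0 inet static
    address 192.168.18.10
    netmask 255.255.255.0

auto eth3
iface eth3 inet static
    address 192.168.18.11
    netmask 255.255.255.0

auto eth4
iface eth4 inet static
    address 192.168.18.12
    netmask 255.255.255.0
```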
HN ARP Flux
The first issue is fixing the ARP flux noted above. Any client on the network broadcasting an ARP "who has" message for any of the HN addresses will receive replies from all three interfaces. This results in IP addresses that float between three MAC addresses, depending on which reply each client ends up caching.
Example One - HN ARP Flux
For example, the following is a tcpdump capture from executing `ping -c2 192.168.18.10` from the client system.
```
00:0c:29:d2:c7:aa > ff:ff:ff:ff:ff:ff, ARP, length 60: arp who-has 192.168.18.10 tell 192.168.18.129
00:0c:29:b3:a2:5e > 00:0c:29:d2:c7:aa, ARP, length 60: arp reply 192.168.18.10 is-at 00:0c:29:b3:a2:5e
00:0c:29:b3:a2:54 > 00:0c:29:d2:c7:aa, ARP, length 60: arp reply 192.168.18.10 is-at 00:0c:29:b3:a2:54
00:0c:29:b3:a2:68 > 00:0c:29:d2:c7:aa, ARP, length 60: arp reply 192.168.18.10 is-at 00:0c:29:b3:a2:68
00:0c:29:d2:c7:aa > 00:0c:29:b3:a2:5e, IPv4, length 98: 192.168.18.129 > 192.168.18.10: ICMP echo request, id 32313, seq 1, length 64
00:0c:29:b3:a2:54 > 00:0c:29:d2:c7:aa, IPv4, length 98: 192.168.18.10 > 192.168.18.129: ICMP echo reply, id 32313, seq 1, length 64
00:0c:29:d2:c7:aa > 00:0c:29:b3:a2:5e, IPv4, length 98: 192.168.18.129 > 192.168.18.10: ICMP echo request, id 32313, seq 2, length 64
00:0c:29:b3:a2:54 > 00:0c:29:d2:c7:aa, IPv4, length 98: 192.168.18.10 > 192.168.18.129: ICMP echo reply, id 32313, seq 2, length 64
00:0c:29:b3:a2:54 > 00:0c:29:d2:c7:aa, ARP, length 60: arp who-has 192.168.18.129 tell 192.168.18.10
00:0c:29:d2:c7:aa > 00:0c:29:b3:a2:54, ARP, length 60: arp reply 192.168.18.129 is-at 00:0c:29:d2:c7:aa
```
The ARP "who has" message generated replies from all three MAC addresses on the HN. In this case the client took the MAC address for eth4. The three ICMP messages are then sent to eth4, but all the replies come from eth0. Normally this behavior isn't a problem, though it may generate some false alarms for a network monitor as it appears someone could be executing a man in the middle attack.
The following output is from executing this command on the HN.

```
sysctl -a | grep net.ipv4.conf.*.arp
```
```
net.ipv4.conf.venet0.arp_accept = 0
net.ipv4.conf.venet0.arp_ignore = 0
net.ipv4.conf.venet0.arp_announce = 0
net.ipv4.conf.venet0.arp_filter = 0
net.ipv4.conf.venet0.proxy_arp = 0
net.ipv4.conf.eth4.arp_accept = 0
net.ipv4.conf.eth4.arp_ignore = 0
net.ipv4.conf.eth4.arp_announce = 0
net.ipv4.conf.eth4.arp_filter = 0
net.ipv4.conf.eth4.proxy_arp = 0
net.ipv4.conf.eth3.arp_accept = 0
net.ipv4.conf.eth3.arp_ignore = 0
net.ipv4.conf.eth3.arp_announce = 0
net.ipv4.conf.eth3.arp_filter = 0
net.ipv4.conf.eth3.proxy_arp = 0
net.ipv4.conf.eth0.arp_accept = 0
net.ipv4.conf.eth0.arp_ignore = 0
net.ipv4.conf.eth0.arp_announce = 0
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.proxy_arp = 0
net.ipv4.conf.lo.arp_accept = 0
net.ipv4.conf.lo.arp_ignore = 0
net.ipv4.conf.lo.arp_announce = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.proxy_arp = 0
net.ipv4.conf.default.arp_accept = 0
net.ipv4.conf.default.arp_ignore = 0
net.ipv4.conf.default.arp_announce = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.all.arp_accept = 0
net.ipv4.conf.all.arp_ignore = 0
net.ipv4.conf.all.arp_announce = 0
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.proxy_arp = 0
```
A Simple Fix That May Work
If all three network interfaces are on different IP networks (such as 10.x.x.x, 172.16.x.x, 192.168.x.x) then executing the following will work:
```
sysctl -w net.ipv4.conf.all.arp_filter=1
```
However, if they are all on the same IP network, which is the case here, this won't achieve the desired result: arp_filter decides which interface answers based on a route lookup towards the requester, and without source-based routing that lookup selects the same interface regardless of which IP address was asked for.
A More Effective Solution
A more effective fix is to change the ARP behaviour itself: arp_ignore=1 makes an interface reply only to queries for addresses configured on that interface, and arp_announce=2 makes the kernel use the best local address when sending ARP requests. Test with the following commands first; once you've verified the behaviour, make it permanent in /etc/sysctl.conf (the persistent form is shown after the commands).
```
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```
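The persistent form in /etc/sysctl.conf uses key = value lines rather than sysctl -w commands:

```
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```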
The following output is from executing this command on the HN.

```
sysctl -a | grep net.ipv4.conf.*.arp
```
```
net.ipv4.conf.venet0.arp_accept = 0
net.ipv4.conf.venet0.arp_ignore = 0
net.ipv4.conf.venet0.arp_announce = 0
net.ipv4.conf.venet0.arp_filter = 0
net.ipv4.conf.venet0.proxy_arp = 0
net.ipv4.conf.eth4.arp_accept = 0
net.ipv4.conf.eth4.arp_ignore = 0
net.ipv4.conf.eth4.arp_announce = 0
net.ipv4.conf.eth4.arp_filter = 0
net.ipv4.conf.eth4.proxy_arp = 0
net.ipv4.conf.eth3.arp_accept = 0
net.ipv4.conf.eth3.arp_ignore = 0
net.ipv4.conf.eth3.arp_announce = 0
net.ipv4.conf.eth3.arp_filter = 0
net.ipv4.conf.eth3.proxy_arp = 0
net.ipv4.conf.eth0.arp_accept = 0
net.ipv4.conf.eth0.arp_ignore = 0
net.ipv4.conf.eth0.arp_announce = 0
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.proxy_arp = 0
net.ipv4.conf.lo.arp_accept = 0
net.ipv4.conf.lo.arp_ignore = 0
net.ipv4.conf.lo.arp_announce = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.proxy_arp = 0
net.ipv4.conf.default.arp_accept = 0
net.ipv4.conf.default.arp_ignore = 0
net.ipv4.conf.default.arp_announce = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.all.arp_accept = 0
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.proxy_arp = 0
```
Example Two - HN ARP Flux Corrected
Now we repeat the ping command, after the arp cache on the client has been cleared.
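Clearing the cached entry on the client can be done with either of the following (arp is from net-tools, ip from iproute2; eth0 here is the client's interface):

```
arp -d 192.168.18.10
# or flush everything learned on the interface:
ip neigh flush dev eth0
```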
```
00:0c:29:d2:c7:aa > ff:ff:ff:ff:ff:ff, ARP, length 60: arp who-has 192.168.18.10 tell 192.168.18.129
00:0c:29:b3:a2:54 > 00:0c:29:d2:c7:aa, ARP, length 60: arp reply 192.168.18.10 is-at 00:0c:29:b3:a2:54
00:0c:29:d2:c7:aa > 00:0c:29:b3:a2:54, IPv4, length 98: 192.168.18.129 > 192.168.18.10: ICMP echo request, id 32066, seq 1, length 64
00:0c:29:b3:a2:54 > 00:0c:29:d2:c7:aa, IPv4, length 98: 192.168.18.10 > 192.168.18.129: ICMP echo reply, id 32066, seq 1, length 64
00:0c:29:d2:c7:aa > 00:0c:29:b3:a2:54, IPv4, length 98: 192.168.18.129 > 192.168.18.10: ICMP echo request, id 32066, seq 2, length 64
00:0c:29:b3:a2:54 > 00:0c:29:d2:c7:aa, IPv4, length 98: 192.168.18.10 > 192.168.18.129: ICMP echo reply, id 32066, seq 2, length 64
00:0c:29:b3:a2:54 > 00:0c:29:d2:c7:aa, ARP, length 60: arp who-has 192.168.18.129 tell 192.168.18.10
00:0c:29:d2:c7:aa > 00:0c:29:b3:a2:54, ARP, length 60: arp reply 192.168.18.129 is-at 00:0c:29:d2:c7:aa
```
The desired effect has been achieved: only interface eth0 on the HN responds to the ARP request, and the other interfaces stay silent.
Adding some containers
Now that the HN is behaving as expected, let's add some containers and see what happens.
Container network setup
The case we are addressing is when the containers are on the same subnet as the HN. So we create two new containers and assign the addresses as follows.
| CTID | IP             |
|------|----------------|
| 101  | 192.168.18.101 |
| 102  | 192.168.18.102 |
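The exact creation commands depend on your environment; a typical vzctl sequence, with a hypothetical OS template name, would be:

```
# The template name is illustrative; substitute one you have installed.
vzctl create 101 --ostemplate debian-4.0-i386-minimal
vzctl set 101 --ipadd 192.168.18.101 --save
vzctl start 101

vzctl create 102 --ostemplate debian-4.0-i386-minimal
vzctl set 102 --ipadd 192.168.18.102 --save
vzctl start 102
```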
Example Three - container ARP Flux
From the client system you should be able to ping both containers. However, looking at the ARP traffic with tcpdump you'll see that once again the network address associated with each container is subject to ARP flux, drifting between all three link-layer addresses over time.
```
00:0c:29:d2:c7:aa > ff:ff:ff:ff:ff:ff, ARP, length 60: arp who-has 192.168.18.101 tell 192.168.18.129
00:0c:29:b3:a2:54 > 00:0c:29:d2:c7:aa, ARP, length 60: arp reply 192.168.18.101 is-at 00:0c:29:b3:a2:54
00:0c:29:b3:a2:68 > 00:0c:29:d2:c7:aa, ARP, length 60: arp reply 192.168.18.101 is-at 00:0c:29:b3:a2:68
00:0c:29:b3:a2:5e > 00:0c:29:d2:c7:aa, ARP, length 60: arp reply 192.168.18.101 is-at 00:0c:29:b3:a2:5e
00:0c:29:d2:c7:aa > 00:0c:29:b3:a2:54, IPv4, length 98: 192.168.18.129 > 192.168.18.101: ICMP echo request, id 43311, seq 1, length 64
00:0c:29:b3:a2:54 > 00:0c:29:d2:c7:aa, IPv4, length 98: 192.168.18.101 > 192.168.18.129: ICMP echo reply, id 43311, seq 1, length 64
00:0c:29:d2:c7:aa > 00:0c:29:b3:a2:54, IPv4, length 98: 192.168.18.129 > 192.168.18.101: ICMP echo request, id 43311, seq 2, length 64
00:0c:29:b3:a2:54 > 00:0c:29:d2:c7:aa, IPv4, length 98: 192.168.18.101 > 192.168.18.129: ICMP echo reply, id 43311, seq 2, length 64
00:0c:29:b3:a2:54 > 00:0c:29:d2:c7:aa, ARP, length 60: arp who-has 192.168.18.129 tell 192.168.18.10
00:0c:29:d2:c7:aa > 00:0c:29:b3:a2:54, ARP, length 60: arp reply 192.168.18.129 is-at 00:0c:29:d2:c7:aa
```
The ARP Cache
The reason for this can be found by executing the following command on the HN to display the ARP cache.

```
arp -an
```
```
? (192.168.18.129) at 00:0C:29:D2:C7:AA [ether] on eth0
? (192.168.18.102) at <from_interface> PERM PUB on eth3
? (192.168.18.102) at <from_interface> PERM PUB on eth4
? (192.168.18.102) at <from_interface> PERM PUB on eth0
? (192.168.18.101) at <from_interface> PERM PUB on eth3
? (192.168.18.101) at <from_interface> PERM PUB on eth4
? (192.168.18.101) at <from_interface> PERM PUB on eth0
```
Another view is obtained from the following command on the HN.
```
cat /proc/net/arp
```
```
IP address       HW type     Flags       HW address            Mask     Device
192.168.18.102   0x1         0xc         00:00:00:00:00:00     *        eth3
192.168.18.102   0x1         0xc         00:00:00:00:00:00     *        eth4
192.168.18.102   0x1         0xc         00:00:00:00:00:00     *        eth0
192.168.18.101   0x1         0xc         00:00:00:00:00:00     *        eth3
192.168.18.101   0x1         0xc         00:00:00:00:00:00     *        eth4
192.168.18.101   0x1         0xc         00:00:00:00:00:00     *        eth0
```
What this shows is that each container IP address is associated with every HN interface as a permanent, published (proxy) entry; the flags value 0xc is ATF_PERM | ATF_PUBL. Therefore each interface will respond to any ARP "who has" query.
The Cause
These entries are created by the vzarp function in the vps_functions script, which is called by vps-net_add, vps-net_del and vps-stop. In our case the result of this function is to execute the following commands:
```
/sbin/ip neigh add proxy 192.168.18.101 dev eth0
/sbin/ip neigh add proxy 192.168.18.101 dev eth4
/sbin/ip neigh add proxy 192.168.18.101 dev eth3
/sbin/ip neigh add proxy 192.168.18.102 dev eth0
/sbin/ip neigh add proxy 192.168.18.102 dev eth4
/sbin/ip neigh add proxy 192.168.18.102 dev eth3
```
In addition, the following ARP messages are sent when CTID 101 is started.
```
00:0c:29:b3:a2:54 > ff:ff:ff:ff:ff:ff, ARP, length 60: arp who-has 192.168.18.101 (ff:ff:ff:ff:ff:ff) tell 192.168.18.10
00:0c:29:b3:a2:5e > ff:ff:ff:ff:ff:ff, ARP, length 60: arp who-has 192.168.18.101 (ff:ff:ff:ff:ff:ff) tell 192.168.18.12
00:0c:29:b3:a2:68 > ff:ff:ff:ff:ff:ff, ARP, length 60: arp who-has 192.168.18.101 (ff:ff:ff:ff:ff:ff) tell 192.168.18.11
00:0c:29:b3:a2:54 > ff:ff:ff:ff:ff:ff, ARP, length 60: arp who-has 192.168.18.101 (ff:ff:ff:ff:ff:ff) tell 192.168.18.101
00:0c:29:b3:a2:5e > ff:ff:ff:ff:ff:ff, ARP, length 60: arp who-has 192.168.18.101 (ff:ff:ff:ff:ff:ff) tell 192.168.18.101
00:0c:29:b3:a2:68 > ff:ff:ff:ff:ff:ff, ARP, length 60: arp who-has 192.168.18.101 (ff:ff:ff:ff:ff:ff) tell 192.168.18.101
00:0c:29:b3:a2:5e > 00:0c:29:b3:a2:68, ARP, length 60: arp reply 192.168.18.101 is-at 00:0c:29:b3:a2:5e
00:0c:29:b3:a2:5e > 00:0c:29:b3:a2:54, ARP, length 60: arp reply 192.168.18.101 is-at 00:0c:29:b3:a2:5e
00:0c:29:b3:a2:68 > 00:0c:29:b3:a2:54, ARP, length 60: arp reply 192.168.18.101 is-at 00:0c:29:b3:a2:68
00:0c:29:b3:a2:68 > 00:0c:29:b3:a2:5e, ARP, length 60: arp reply 192.168.18.101 is-at 00:0c:29:b3:a2:68
00:0c:29:b3:a2:54 > 00:0c:29:b3:a2:5e, ARP, length 60: arp reply 192.168.18.101 is-at 00:0c:29:b3:a2:54
00:0c:29:b3:a2:54 > 00:0c:29:b3:a2:68, ARP, length 60: arp reply 192.168.18.101 is-at 00:0c:29:b3:a2:54
```
What we see here is the result of vzarpipdetect, another function in vps_functions called by vps-net_add. An ARP "who has" message is sent by each interface and answered by the other interfaces.
What we want is to add each container's IP address as a proxy entry on one specific device only, not on all devices. This will prevent the ARP flux problem for our containers.
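Concretely, if CT 101 should be reachable only via eth3, the end state differs from what vzarp produced by two entries, which could be removed by hand like this:

```
# Keep the proxy entry on eth3; remove the ones vzarp added elsewhere.
/sbin/ip neigh del proxy 192.168.18.101 dev eth0
/sbin/ip neigh del proxy 192.168.18.101 dev eth4
```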
The Quick Fix
Unfortunately this involves editing the OpenVZ scripts. The only case we really care about is vps-net_add, as the others execute `ip neigh del proxy`.
Manually editing the vzarp function is a quick fix, but not advised: carrying your own fork of the OpenVZ scripts is difficult to maintain and may have unintended side effects.
Fortunately there is a feature that allows custom scripts to run during container startup.
This approach is also described in virtual Ethernet device.
Create the file /etc/vz/vznet.conf (or /etc/vz/vznetcfg). Note that this only works with a recent version of OpenVZ (vzctl 3.0.14 or later), as the change was introduced in December 2006. The file name seems to have changed between the two listed here, so some trial and error may be required.
```
#!/bin/bash
EXTERNAL_SCRIPT="/usr/lib/vzctl/scripts/vznet-custom"
```
Finally, create the file /usr/lib/vzctl/scripts/vznet-custom, make it executable, and add your custom commands.
TODO: Discuss the contents of the script and provide tested examples; getting this to work still needs more investigation.
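Until that is worked out, here is a minimal, untested sketch of the idea. Everything in it is an assumption: that vzctl exports VEID to the external script, that the container config in /etc/vz/conf/$VEID.conf defines IP_ADDRESS, and the CTID-to-interface mapping is just this page's example.

```
#!/bin/bash
# /usr/lib/vzctl/scripts/vznet-custom -- untested sketch.
# ASSUMPTION: vzctl exports VEID when invoking EXTERNAL_SCRIPT.

# Pin each container to the one interface that should answer ARP for it.
case "$VEID" in
    101) DEV=eth3 ;;
    102) DEV=eth4 ;;
    *)   exit 0 ;;   # leave all other containers alone
esac

# ASSUMPTION: the container config defines IP_ADDRESS="a.b.c.d ...".
. /etc/vz/conf/"$VEID".conf

for ip in $IP_ADDRESS; do
    # Remove the proxy ARP entries vzarp created on the other interfaces...
    for other in eth0 eth3 eth4; do
        [ "$other" = "$DEV" ] && continue
        /sbin/ip neigh del proxy "$ip" dev "$other" 2>/dev/null
    done
    # ...and make sure the entry exists on the chosen interface.
    /sbin/ip neigh add proxy "$ip" dev "$DEV" 2>/dev/null
done
```

Whether vzctl actually invokes this hook for venet-based containers is exactly the open question in the TODO above, so verify that the script fires on your vzctl version before relying on it.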
Testing Environment
All of the examples were generated and tested using Debian Etch for the HN and Debian Stable for the containers. VMware Workstation was used to create the test networks. The client is the BackTrack live CD from Remote Exploit. If you get different results with other Linux releases, please edit this page.