Multiple network interfaces and ARP flux

==Overview==
This page discusses working with multiple network interfaces on the [[Hardware Node]] (HN), how this results in ARP flux, and how to address it.
==The Simple Case==
In the simple case you have multiple network interfaces on the HN, all with IP addresses in the same subnet. Each of your containers also has an IP address in the same subnet, and you don't care which interfaces your containers use.
No action is required; everything just works. Set up OpenVZ normally.
==A More Complex Case==
Let's say you have three network interfaces on the HN, all with IP addresses on the same subnet. Each of your containers also has an IP address on the same subnet. But now you ''do'' care which interface your containers use.
For example, you want some of your containers to always use <code>eth3</code>, and some to use <code>eth4</code>, but none of the container traffic should use <code>eth0</code>, which is reserved for use by the HN only. This makes sense if you have containers that may generate or receive a lot of traffic, and you don't want your remote administration of the server over <code>eth0</code> to degrade or be blocked because of it.
===Example Network Setup===
To make this clear we'll use the following HN configuration. We'll also have another system to act as the client.
{| class="wikitable" align="center" cellpadding=5
! System !! Interface !! MAC Address !! IP Address
|}
The desired effect has been achieved: only interface <code>eth0</code> on the HN responds to the ARP query, and the other interfaces are silent.
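A kernel-level way to get this behaviour, independent of OpenVZ, is to tighten the ARP sysctls so that an interface only answers ARP queries for addresses configured on it. This is a sketch of an <code>/etc/sysctl.conf</code> fragment; <code>arp_ignore</code> and <code>arp_announce</code> are standard Linux 2.6 parameters, but verify the values against your kernel's documentation before relying on them:

<pre>
# /etc/sysctl.conf
# Reply only to ARP queries for addresses configured on the receiving interface
net.ipv4.conf.all.arp_ignore = 1
# When sending ARP, prefer a source address belonging to the outgoing interface
net.ipv4.conf.all.arp_announce = 2
</pre>

Apply the settings with <code>sysctl -p</code>.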
===Adding some containers===
Now that the HN is behaving as expected, let's add some containers and see what happens.
====Container network setup====
The case we're addressing is when the containers are on the same subnet as the HN. So we create two new containers and assign the addresses as follows.
{| class="wikitable" align="center" cellpadding=5
! CTID !! IP
|-
| 101 || 192.168.18.101
|}
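The container setup above can be sketched with <code>vzctl</code>. These commands must run as root on the HN, and the OS template name is only illustrative; substitute whatever template you actually use:

<pre>
# Create a container and assign its address (template name is illustrative)
vzctl create 101 --ostemplate debian-4.0-i386-minimal
vzctl set 101 --ipadd 192.168.18.101 --save
vzctl start 101
</pre>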
====Example Three - container ARP flux====
From the client system you should be able to ping both containers. However, looking at the ARP traffic with tcpdump you'll see that, once again, the network address associated with each container is subject to ARP flux, drifting between all three link-layer addresses over time.
<pre>
</pre>
What this shows is that each container's IP address is associated with every interface on the HN, so each interface will respond to any ARP "who has" query.
====The Cause====
In addition, the following ARP messages are sent when CTID 101 is started.
<pre>
</pre>
What we see here is the result of <code>vzarpipdetect</code>, another function in <code>vps_functions</code> called by <code>vps-net_add</code>. An ARP "who has" message is sent by each interface and answered by the other interfaces.
What we want is to add the IP addresses of our containers only to specific devices, not to all devices. This will prevent the ARP flux problem for our containers.
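Per-device proxy ARP entries are one way to do exactly that: with iproute2 you can publish an address on a single interface instead of all of them. A minimal sketch, to be run as root on the HN, using the addresses from the example setup:

<pre>
# Publish the container's address via eth3 only (per-device proxy ARP entry)
ip neigh add proxy 192.168.18.101 dev eth3
# Verify: list the proxy entries for eth3
ip neigh show proxy dev eth3
</pre>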
====The Quick Fix====
Manually editing the vzarp script is a quick fix, but not advised: creating your own ''fork'' of OpenVZ is difficult to maintain and may have unintended side effects.
Fortunately there is a feature which allows custom scripts to run during container startup.
This approach is also described in [[Virtual Ethernet device]].
Create the file /etc/vz/vznet.conf or /etc/vz/vznetcfg. Note that this only works with a recent version of OpenVZ (vzctl-3.0.14), as the feature was introduced in December 2006. The file name seems to have changed between the two listed here, so some trial and error may be required.
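As a sketch of what that file might contain: later vzctl versions source <code>vznet.conf</code> and use an <code>EXTERNAL_SCRIPT</code> variable to name the hook to run; the script path below is hypothetical, and the exact variable name should be checked against your vzctl version:

<pre>
#!/bin/bash
# /etc/vz/vznet.conf - sourced by vzctl during container network setup.
# The script named here (hypothetical path) would add per-device ARP
# entries for the container's addresses instead of publishing on all devices.
EXTERNAL_SCRIPT="/usr/local/sbin/vznet-arp-fix"
</pre>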
==Testing Environment==
All of the examples were generated and tested using Debian Etch for the HN and Debian Stable for the containers. VMware Workstation was used to create the test networks. The client is the BackTrack live CD from Remote Exploit. If you get different results with other Linux releases, please edit this page.
[[Category:HOWTO]]
[[Category:Networking]]
[[Category:Debian]]