Bonding

Linux allows bonding multiple network interfaces into a single channel/NIC.

== Introduction ==
The Linux bonding driver provides a method for aggregating
multiple network interfaces into a single logical "bonded" interface.
The behavior of the bonded interfaces depends upon the mode; generally
speaking, modes provide either hot standby or load balancing services.
Additionally, link integrity monitoring may be performed.

== Setting up bonding with RHEL/CentOS 4.4 ==
**Step #1: Create a bond0 configuration file

Red Hat Linux stores network configuration in the /etc/sysconfig/network-scripts/ directory. First, you need to create the bond0 config file:

# vi /etc/sysconfig/network-scripts/ifcfg-bond0

Append the following lines to it:
*Static IP
<pre>
DEVICE=bond0
IPADDR=x.x.x.x
NETWORK=y.y.y.y
NETMASK=z.z.z.z
BOOTPROTO=none
ONBOOT=yes
</pre>
x.x.x.x is the IP address of the host.

y.y.y.y is the network address of the host.

z.z.z.z is the netmask (usually 255.255.255.0).

Replace the above with your actual IP data. Save the file and exit to the shell prompt.

*DHCP
<pre>
DEVICE=bond0
BOOTPROTO=dhcp
ONBOOT=yes
</pre>
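For instance, the static variant above, filled in with hypothetical addresses on a 192.168.1.0/24 network, might read:

```shell
DEVICE=bond0
IPADDR=192.168.1.20
NETWORK=192.168.1.0
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes
```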

**Step #2: Modify eth0 and eth1 config files:

Open both configuration files using the vi text editor and make sure the file reads as follows for the eth0 interface:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0
<pre>
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
</pre>

Open the eth1 configuration file using the vi text editor:

# vi /etc/sysconfig/network-scripts/ifcfg-eth1

Make sure the file reads as follows for the eth1 interface:
<pre>
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
</pre>
Save the file and exit to the shell prompt.
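The two slave files differ only in their DEVICE= line, so they can be generated with a small helper script. This is a sketch, not part of the original article; the DIR variable is an assumption so the script can be tried outside /etc — on a real RHEL/CentOS system it would be /etc/sysconfig/network-scripts:

```shell
#!/bin/sh
# Sketch: generate the ifcfg files for the bond0 slaves eth0 and eth1.
# DIR defaults to a scratch directory for safe experimenting.
DIR=${DIR:-/tmp/network-scripts}
mkdir -p "$DIR"
for dev in eth0 eth1; do
  cat > "$DIR/ifcfg-$dev" <<EOF
DEVICE=$dev
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
EOF
done
```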

**Step #3: Load the bonding driver/module
Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel modules configuration file:

# vi /etc/modprobe.conf
<pre>
alias bond0 bonding
options bond0 mode=balance-alb miimon=100
</pre>
You can learn more about all bonding options in the kernel source documentation file "Documentation/networking/bonding.txt".
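If you want a hot-standby pair instead of adaptive load balancing, the same file can request active-backup mode. This is a sketch of an alternative configuration; the optional primary= parameter names the preferred active slave:

```shell
alias bond0 bonding
options bond0 mode=active-backup miimon=100 primary=eth0
```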

**Step #4: Test the configuration
First, load the bonding module:
<pre>
# modprobe bonding
</pre>
Restart the networking service to bring up the bond0 interface:
<pre>
# service network restart
</pre>
Check proc info:

# cat /proc/net/bonding/bond0
<pre>
Ethernet Channel Bonding Driver: v2.6.3 (June 8, 2005)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:07:d4:c3

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:07:d4:cd
</pre>
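For scripted monitoring, the per-slave MII status can be extracted from this output with awk. This is a sketch, not part of the original article; a captured sample is inlined so the snippet is self-contained — on a live system you would pipe /proc/net/bonding/bond0 in instead:

```shell
# Print "interface status" for each slave in bonding proc output.
# The inlined sample stands in for: cat /proc/net/bonding/bond0
printf '%s\n' \
  'MII Status: up' \
  'Slave Interface: eth0' \
  'MII Status: up' \
  'Slave Interface: eth1' \
  'MII Status: up' |
awk '/^Slave Interface:/ {iface=$3} /^MII Status:/ && iface {print iface, $3}'
# prints: eth0 up
#         eth1 up
```

The `&& iface` guard skips the bond-level MII Status line that appears before any slave.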

List all interfaces:
<pre>
# ip a

2: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
4: bond0: <BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue
link/ether 00:0c:29:73:26:19 brd ff:ff:ff:ff:ff:ff
inet 10.17.3.25/16 brd 10.17.255.255 scope global bond0
6: eth0: <BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
link/ether 00:0c:29:73:26:19 brd ff:ff:ff:ff:ff:ff
8: eth1: <BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
link/ether 00:0c:29:73:26:19 brd ff:ff:ff:ff:ff:ff
1: venet0: <BROADCAST,POINTOPOINT,NOARP,UP> mtu 1500 qdisc noqueue
link/void
</pre>
Route:
<pre>
# ip r

10.17.0.0/16 dev bond0 proto kernel scope link src 10.17.3.25
169.254.0.0/16 dev bond0 scope link
default via 10.17.0.1 dev bond0
</pre>
== Traffic shaping ==
* Virtuozzo traffic shaping tools
Just replace the old network device with the new bonding device (bond0):

# vi /etc/sysconfig/vz
<pre>
## Network traffic parameters
TRAFFIC_SHAPING=yes
BANDWIDTH="bond0:102400"
TOTALRATE="bond0:1:4096"
RATE="bond0:1:8"
</pre>
and do the rest as usual:
<pre>
# vzctl set $veid --ratebound $bound --rate $rif:$class:$rate --save
</pre>
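For example, assuming container ID 101 and the class 1 rate from the config above (all values hypothetical):

```shell
# Hypothetical values: container ID 101, device bond0, class 1, rate 8
vzctl set 101 --ratebound yes --rate bond0:1:8 --save
```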

* Traffic shaping with tc
There is nothing bonding-specific here; see [[Traffic_shaping_with_tc]].

As a result:
<pre>
# ip a s bond0
4: bond0: <BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc cbq
## NOTE: the Class Based Queueing (cbq) qdisc added by tc ^^^^^^^^^^
link/ether 00:0c:29:07:d4:c3 brd ff:ff:ff:ff:ff:ff
inet 10.17.3.41/16 brd 10.17.255.255 scope global bond0
</pre>