Bonding

Linux allows bonding multiple network interfaces into a single logical channel/NIC.

Introduction

The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.
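
In practice the mode is selected through module options when the bonding driver is loaded. As a quick illustration only (mode and miimon are standard bonding options, but the values below are arbitrary examples, not part of the setup described later):

# hot standby (failover)
options bond0 mode=active-backup miimon=100
# or simple round-robin load balancing
# options bond0 mode=balance-rr miimon=100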

Setting up bonding on a RHEL/CentOS 4 system

Create a bond0 configuration file

Red Hat Linux stores network configuration in the /etc/sysconfig/network-scripts/ directory. First, you need to create a bond0 config file:

# vi /etc/sysconfig/network-scripts/ifcfg-bond0

Append the following lines to it:

In case of static IP

DEVICE=bond0
IPADDR=x.x.x.x
NETWORK=y.y.y.y
NETMASK=z.z.z.z
BOOTPROTO=none
ONBOOT=yes

x.x.x.x is the IP address of the hardware node (HW).

y.y.y.y is the network address of the hardware node.

z.z.z.z is the netmask of the hardware node (usually 255.255.255.0).

Replace the above placeholders with your actual IP data. Save the file and exit to the shell prompt.
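
For example, a filled-in ifcfg-bond0 for a node at the made-up address 192.168.0.10 on a /24 network could look like this:

DEVICE=bond0
IPADDR=192.168.0.10
NETWORK=192.168.0.0
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes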

In case of DHCP

DEVICE=bond0
BOOTPROTO=dhcp
ONBOOT=yes

Modify eth0 and eth1 config files

Open both configurations using vi (or another text editor) and make sure the file for interface eth0 reads as follows:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

Open the eth1 configuration file using vi:

# vi /etc/sysconfig/network-scripts/ifcfg-eth1

Make sure the file reads as follows for interface eth1:

DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

Save the file and exit to the shell prompt.

Load the bonding driver/module

Make sure the bonding module is loaded before the channel-bonding interface (bond0) is brought up. You need to modify the kernel modules configuration file:

# vi /etc/modprobe.conf
alias bond0 bonding
options bond0 miimon=100

You can learn more about the bonding options in the kernel source documentation file Documentation/networking/bonding.txt.
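
If modinfo is available on the system, you can also list the module parameters the bonding driver supports directly:

# modinfo -p bonding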

Test the configuration

First, load the bonding module:

# modprobe bonding

Restart the networking service in order to bring up the bond0 interface:

# service network restart
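
Depending on the initscripts version, it may be enough to bring up just the bonded interface instead of restarting all networking; the interfaces configured with MASTER=bond0 are then enslaved by the ifup scripts:

# ifup bond0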

Check the bonding status in /proc:

# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v2.6.3 (June 8, 2005)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:07:d4:c3

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:07:d4:cd

List all interfaces:

# ip a

2: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
4: bond0: <BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue 
    link/ether 00:0c:29:73:26:19 brd ff:ff:ff:ff:ff:ff
    inet 10.17.3.25/16 brd 10.17.255.255 scope global bond0
6: eth0: <BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
    link/ether 00:0c:29:73:26:19 brd ff:ff:ff:ff:ff:ff
8: eth1: <BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
    link/ether 00:0c:29:73:26:19 brd ff:ff:ff:ff:ff:ff
1: venet0: <BROADCAST,POINTOPOINT,NOARP,UP> mtu 1500 qdisc noqueue 
    link/void

Check the routes:

# ip r

10.17.0.0/16 dev bond0  proto kernel  scope link  src 10.17.3.25 
169.254.0.0/16 dev bond0  scope link 
default via 10.17.0.1 dev bond0
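
To verify that the bond survives a link failure, one simple test is to take a slave down and watch its entry in /proc (do this from the console, since you are disabling a NIC):

# ip link set eth0 down
# grep -A 3 "Slave Interface: eth0" /proc/net/bonding/bond0
# ip link set eth0 up

The MII Status of eth0 should change to down and its Link Failure Count should increase, while bond0 keeps working over eth1.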

Traffic shaping

There are no bonding-specific details here; see Traffic shaping with tc.
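
As a rough sketch only (the handles, rates and addresses below are arbitrary examples; the real recipe is in the article mentioned above), a CBQ root qdisc is attached to bond0 just like to any other interface:

# tc qdisc add dev bond0 root handle 1: cbq bandwidth 100mbit avpkt 1000 cell 8
# tc class add dev bond0 parent 1: classid 1:1 cbq bandwidth 100mbit rate 10mbit \
      allot 1514 prio 5 bounded isolated avpkt 1000
# tc filter add dev bond0 parent 1: protocol ip prio 16 u32 \
      match ip dst 10.17.3.0/24 flowid 1:1

The first command attaches the root CBQ qdisc (this is what shows up as qdisc cbq below), the second creates a 10 mbit class, and the third directs traffic for 10.17.3.0/24 into that class.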

As a result, the CBQ qdisc shows up on bond0:

# ip a s bond0
4: bond0: <BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc cbq 
## NOTE: the qdisc is now cbq (Class Based Queueing), added by tc
    link/ether 00:0c:29:07:d4:c3 brd ff:ff:ff:ff:ff:ff
    inet 10.17.3.41/16 brd 10.17.255.255 scope global bond0