
High CPU utilization with Bonding driver ?

To: <netdev@xxxxxxxxxxx>, <bonding-devel@xxxxxxxxxxxxxxxxxxxxx>
Subject: High CPU utilization with Bonding driver ?
From: "Ravinandan Arakali" <ravinandan.arakali@xxxxxxxxxxxx>
Date: Tue, 29 Mar 2005 10:22:19 -0800
Cc: "Leonid. Grossman \(E-mail\)" <leonid.grossman@xxxxxxxxxxxx>, "Raghavendra. Koushik \(E-mail\)" <raghavendra.koushik@xxxxxxxxxxxx>
Importance: Normal
Sender: netdev-bounce@xxxxxxxxxxx
Hi,
We are facing the following problem with the bonding driver and two
10-gigabit Ethernet cards.
Any help is greatly appreciated.

Configuration:
--------------
Server:  Four-processor AMD Opteron running a 2.6.5 kernel
Switch:  Foundry stackable switch
Clients: Two Opteron systems, each with one 10-gigabit card.
Bonding: Two 10G cards bonded in LACP mode. One card in a 133 MHz slot,
         the other in a 100 MHz slot (though we suspect the latter is
         scaling down to 66 MHz). A setup sketch follows below.

Results (8 nttcp/Chariot streams):
---------------------------------
1. Combined throughputs (but no bonding):
3.1 + 6.2 = 9.3 Gbps with 58% CPU idle.

2. eth0 and eth1 bonded together in LACP mode:
8.2 Gbps with 1% CPU idle.

From the above results, when the bonding driver is used (#2), the CPUs
are completely maxed out, compared to the case where traffic is run
simultaneously on both cards without bonding (#1).
Can anybody suggest reasons for this behavior?
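For anyone looking into this, here is a sketch of the checks that seem
relevant on our end; the IRQ numbers and grep pattern are illustrative
and would need to be read from the actual system:

    # confirm both slaves joined the same LACP aggregator
    cat /proc/net/bonding/bond0

    # see whether both NICs' interrupts land on the same CPU
    grep eth /proc/interrupts

    # pin the two NICs' IRQs to different CPUs (hex bitmasks;
    # replace 24/25 with the real IRQ numbers from /proc/interrupts)
    echo 1 > /proc/irq/24/smp_affinity
    echo 2 > /proc/irq/25/smp_affinity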

Thanks,
Ravi


