
Re: High CPU utilization with Bonding driver ?

To: Ravinandan Arakali <ravinandan.arakali@xxxxxxxxxxxx>
Subject: Re: High CPU utilization with Bonding driver ?
From: Arthur Kepner <akepner@xxxxxxx>
Date: Tue, 29 Mar 2005 10:29:07 -0800 (PST)
Cc: netdev@xxxxxxxxxxx, bonding-devel@xxxxxxxxxxxxxxxxxxxxx, "Leonid. Grossman (E-mail)" <leonid.grossman@xxxxxxxxxxxx>, "Raghavendra. Koushik (E-mail)" <raghavendra.koushik@xxxxxxxxxxxx>
In-reply-to: <001601c5348c$3f417f50$3a10100a@xxxxxxxxxxx>
References: <001601c5348c$3f417f50$3a10100a@xxxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
On Tue, 29 Mar 2005, Ravinandan Arakali wrote:

> ....
> Results(8 nttcp/chariot streams):
> ---------------------------------
> 1. Combined throughputs (but no bonding):
> 3.1 + 6.2 = 9.3 Gbps with 58% CPU idle.
> 
> 2. eth0 and eth1 bonded together in LACP mode:
> 8.2 Gbps with 1% CPU idle.
> 
> From the above results, when the bonding driver is used (#2), the CPUs
> are completely maxed out compared to the case where traffic is run
> simultaneously on both cards (#1).
> Can anybody suggest some reasons for this behavior?
> 

Ravi;

Have you tried this patch? 

http://marc.theaimsgroup.com/?l=linux-netdev&m=111091146828779&w=2

If not, it will likely go a long way toward solving your 
problem.
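For reference, the LACP bond in case #2 can be set up along these lines. This is a sketch using modern iproute2 syntax (rather than the ifenslave/modprobe tooling of the era); the interface names, bond name, and address are placeholders:

```shell
# Create a bond in 802.3ad (LACP) mode; hashing on the L3+L4 tuple
# spreads distinct TCP streams (e.g. the 8 nttcp/chariot flows above)
# across slaves better than the default MAC-based hash.
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4

# Slaves must be down before they can be enslaved.
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# Bring up the bond and give it an address (placeholder).
ip link set bond0 up
ip addr add 10.0.0.1/24 dev bond0
```

Note that the switch ports the two NICs attach to must also be configured as an LACP aggregation group for 802.3ad mode to negotiate.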

--
Arthur
