
RE: High CPU utilization with Bonding driver ?

To: "'Arthur Kepner'" <akepner@xxxxxxx>
Subject: RE: High CPU utilization with Bonding driver ?
From: "Ravinandan Arakali" <ravinandan.arakali@xxxxxxxxxxxx>
Date: Tue, 29 Mar 2005 11:13:30 -0800
Cc: <netdev@xxxxxxxxxxx>, <bonding-devel@xxxxxxxxxxxxxxxxxxxxx>, "'Leonid. Grossman \(E-mail\)'" <leonid.grossman@xxxxxxxxxxxx>, "'Raghavendra. Koushik \(E-mail\)'" <raghavendra.koushik@xxxxxxxxxxxx>
Importance: Normal
In-reply-to: <>
Sender: netdev-bounce@xxxxxxxxxxx
Thanks for the reply.
Not yet. Will try out the patch.


-----Original Message-----
From: Arthur Kepner [mailto:akepner@xxxxxxx]
Sent: Tuesday, March 29, 2005 10:29 AM
To: Ravinandan Arakali
Cc: netdev@xxxxxxxxxxx; bonding-devel@xxxxxxxxxxxxxxxxxxxxx; Leonid. Grossman (E-mail); Raghavendra. Koushik (E-mail)
Subject: Re: High CPU utilization with Bonding driver ?

On Tue, 29 Mar 2005, Ravinandan Arakali wrote:

> ....
> Results(8 nttcp/chariot streams):
> ---------------------------------
> 1. Combined throughputs(but no bonding):
> 3.1 + 6.2 = 9.3 Gbps with 58% CPU idle.
> 2. eth0 and eth1 bonded together in LACP mode:
> 8.2 Gbps with 1% CPU idle.
> From the above results, when the bonding driver is used (#2), the CPUs are
> completely maxed out, compared to the case when traffic is run
> simultaneously on both cards (#1).
> Can anybody suggest some reasons for the above behavior?


Have you tried this patch?

If not, it will likely go a long way toward solving your problem.
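For reference, a minimal sketch of how an 802.3ad (LACP) bond like the one described above is typically assembled on a 2.6-era kernel. The interface names, module options, and address are assumptions for illustration, not the poster's actual configuration:

```shell
# Load the bonding driver in 802.3ad (LACP) mode with link monitoring
# every 100 ms (mode and miimon values are assumptions).
modprobe bonding mode=802.3ad miimon=100

# Bring the bond device up, then enslave the two 10GbE interfaces.
ip link set bond0 up
ifenslave bond0 eth0 eth1

# Assign a hypothetical address to the bonded interface.
ip addr add 192.168.1.10/24 dev bond0
```

The switch ports facing eth0 and eth1 must also be configured as an LACP aggregation group, or the bond will fall back to a single active link.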

