I had similar problems with NAPI and DL2K. I was only able to "resolve"
the issue by forcing my application and the NIC to a single CPU using
CPU affinity hacks.
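For what it's worth, the userspace half of that hack looks roughly like
the sketch below, assuming a kernel and glibc that provide
sched_setaffinity() (stock 2.4 needs the affinity patch). The NIC side is
just steering the adapter's interrupt to the same CPU, e.g.

    echo 1 > /proc/irq/<NIC_IRQ>/smp_affinity

where <NIC_IRQ> is whatever /proc/interrupts reports for the adapter.

#define _GNU_SOURCE
#include <sched.h>      /* sched_setaffinity(), cpu_set_t */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    cpu_set_t mask;

    /* Pin the calling process to CPU 0 only. */
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);

    if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }

    /* ... run the packet-handling workload here ... */
    return EXIT_SUCCESS;
}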
-----Original Message-----
From: Ben Greear [mailto:greearb@xxxxxxxxxxxxxxx]
Sent: Monday, September 16, 2002 9:10 AM
To: Chris Friesen
Cc: Cacophonix; linux-net@xxxxxxxxxxxxxxx; netdev@xxxxxxxxxxx
Subject: Re: bonding vs 802.3ad/Cisco EtherChannel link aggregation
Chris Friesen wrote:
> Cacophonix wrote:
>
>>--- Chris Friesen <cfriesen@xxxxxxxxxxxxxxxxxx> wrote:
>
>
>>>This has always confused me. Why doesn't the bonding driver try to
>>>spread all the traffic over all the links?
>>
>>Because then you risk heavy packet reordering within an individual
>>flow, which can be detrimental in some cases. --karthik
>
>
> I can see how it could make the receiving host work harder on
> reassembly, but if throughput is key, wouldn't you still end up better
> off if you could push twice as many packets through the pipe?
>
> Chris
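(For context: bonding's XOR transmit policy avoids intra-flow reordering
by hashing the Ethernet addresses, so a given src/dst pair always goes
out the same slave while different pairs spread across the links. A
minimal sketch of the idea; pick_slave() is illustrative, not the
driver's actual code.)

#include <stdio.h>

/* Pick an output port for a frame so that all frames of one
 * src/dst pair always leave on the same link.  The real driver
 * hashes the low byte of the Ethernet addresses in much the
 * same way. */
static unsigned int pick_slave(const unsigned char *src_mac,
                               const unsigned char *dst_mac,
                               unsigned int n_slaves)
{
    return (src_mac[5] ^ dst_mac[5]) % n_slaves;
}

int main(void)
{
    unsigned char src[6] = {0x00,0x10,0x20,0x30,0x40,0x01};
    unsigned char dst[6] = {0x00,0x10,0x20,0x30,0x40,0x0a};

    /* Same pair -> same slave every time, hence no intra-flow
     * reordering; different pairs still spread over the links. */
    printf("slave %u\n", pick_slave(src, dst, 2));
    return 0;
}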
Also, I notice lots of out-of-order packets on a single gigE link when
running at high speeds (on an SMP machine), so the kernel still has to
reorder quite a few packets. Has anyone done any tests to see how much
worse it gets with dual-port bonding?
NAPI helps my problem, but does not make it go away entirely.
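A quick way to quantify it is to watch the TcpExt reordering counters in
/proc/net/netstat (a rough sketch; the exact counter names vary by
kernel version):

#include <stdio.h>
#include <string.h>

/* Print any TcpExt counter whose name contains "Reorder".
 * /proc/net/netstat comes in pairs of lines: one with field
 * names, the next with the matching values. */
int main(void)
{
    FILE *f = fopen("/proc/net/netstat", "r");
    char hdr[1024], val[1024];

    if (!f) { perror("/proc/net/netstat"); return 1; }

    while (fgets(hdr, sizeof(hdr), f) && fgets(val, sizeof(val), f)) {
        char *np, *vp;
        char *n = strtok_r(hdr, " \n", &np);
        char *v = strtok_r(val, " \n", &vp);

        /* Walk name/value tokens in lockstep. */
        while (n && v) {
            if (strstr(n, "Reorder"))
                printf("%s = %s\n", n, v);
            n = strtok_r(NULL, " \n", &np);
            v = strtok_r(NULL, " \n", &vp);
        }
    }
    fclose(f);
    return 0;
}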
Ben
--
Ben Greear <greearb@xxxxxxxxxxxxxxx> <Ben_Greear AT excite.com>
President of Candela Technologies Inc http://www.candelatech.com
ScryMUD: http://scry.wanfear.com http://scry.wanfear.com/~greear