
Re: bonding vs 802.3ad/Cisco EtherChannel link aggregation

To: Chris Friesen <cfriesen@xxxxxxxxxxxxxxxxxx>
Subject: Re: bonding vs 802.3ad/Cisco EtherChannel link aggregation
From: Ben Greear <greearb@xxxxxxxxxxxxxxx>
Date: Mon, 16 Sep 2002 09:09:42 -0700
Cc: Cacophonix <cacophonix@xxxxxxxxx>, linux-net@xxxxxxxxxxxxxxx, netdev@xxxxxxxxxxx
Organization: Candela Technologies
References: <20020913222213.69396.qmail@xxxxxxxxxxxxxxxxxxxxxxx> <3D85DB3D.DC65A80B@xxxxxxxxxxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.1b) Gecko/20020722
Chris Friesen wrote:
Cacophonix wrote:

--- Chris Friesen <cfriesen@xxxxxxxxxxxxxxxxxx> wrote:


This has always confused me.  Why doesn't the bonding driver try to spread all the traffic over all the links?

Because then you risk heavy packet reordering within an individual flow,
which can be detrimental in some cases.
--karthik


I can see how it could make the receiving host work more on reassembly, but if throughput is key, wouldn't you still end up better off if you could push twice as many packets through the pipe?

Chris
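
For what it's worth, the ordering guarantee of the flow-based policies is easy to see from the transmit hash.  Something like this (simplified; the real bonding driver code differs in detail) is all it takes to pin a flow to one slave:

/* Sketch of a layer-2-style transmit hash: XOR the low bytes of the
 * MAC pair and reduce modulo the number of active slaves.  Every
 * frame between the same two hosts picks the same slave, so a flow
 * can't be reordered across links -- but it also can't use more than
 * one link's worth of bandwidth. */
#include <stdio.h>

static unsigned int xmit_slave(const unsigned char src[6],
                               const unsigned char dst[6],
                               unsigned int nslaves)
{
        return (src[5] ^ dst[5]) % nslaves;
}

int main(void)
{
        const unsigned char a[6] = { 0x00, 0x04, 0x76, 0x12, 0x34, 0x56 };
        const unsigned char b[6] = { 0x00, 0x0a, 0x95, 0x9a, 0xbc, 0xde };

        printf("a->b always goes out slave %u of 2\n", xmit_slave(a, b, 2));
        return 0;
}

And the cost of reordering is more than reassembly work on the receiver: once a segment arrives three or more places early, the receiver's duplicate ACKs look like loss to a Reno/NewReno sender, which fast-retransmits and halves its congestion window.  Striping a single flow across links can easily lose more to that than the extra link adds.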

Also, I notice lots of out-of-order packets on a single gigE link when running at high speeds (SMP machine), so the kernel is already having to reorder quite a few packets even without bonding.  Has anyone done any tests to see how much worse it is with dual-port bonding?
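
Short of real numbers, a toy model at least shows the mechanism: stripe sequenced packets round-robin over links whose per-packet delay jitters independently, and count arrivals that overtake an earlier sequence number.  All the delays and rates below are invented for illustration; this measures the model, not any real NIC:

#include <stdio.h>
#include <stdlib.h>

#define NPKTS 100000

struct pkt {
        double arrive;
        int seq;
};

static int by_arrival(const void *a, const void *b)
{
        const struct pkt *x = a, *y = b;
        return (x->arrive > y->arrive) - (x->arrive < y->arrive);
}

/* Send NPKTS back-to-back packets round-robin over nlinks links.
 * Departure spacing is 1 tick; each link adds a fixed base delay
 * plus 0-2 ticks of random jitter. */
static int count_reordered(int nlinks)
{
        static struct pkt p[NPKTS];
        int i, max_seen = -1, reordered = 0;

        srand(42);
        for (i = 0; i < NPKTS; i++) {
                double base   = 10.0 + 0.3 * (i % nlinks);
                double jitter = 2.0 * rand() / (double)RAND_MAX;
                p[i].seq    = i;
                p[i].arrive = i + base + jitter;
        }
        qsort(p, NPKTS, sizeof(p[0]), by_arrival);

        /* A packet is "reordered" if a higher sequence number has
         * already arrived before it. */
        for (i = 0; i < NPKTS; i++) {
                if (p[i].seq < max_seen)
                        reordered++;
                else
                        max_seen = p[i].seq;
        }
        return reordered;
}

int main(void)
{
        printf("1 link : %d of %d reordered\n", count_reordered(1), NPKTS);
        printf("2 links: %d of %d reordered\n", count_reordered(2), NPKTS);
        return 0;
}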

NAPI helps my problem, but does not make it go away entirely.
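
The reason NAPI helps, as I understand it, is that the rx ring gets drained by one poll at a time and fed to the stack in ring order, instead of every interrupt dumping skbs onto whatever CPU it happened to land on.  Roughly this shape (the netif_* calls are the real 2.5 interface; the my_* helpers are made-up stand-ins for driver specifics):

/* Interrupt handler: mask rx interrupts and schedule the poll. */
static void my_intr(int irq, void *dev_id, struct pt_regs *regs)
{
        struct net_device *dev = dev_id;

        my_disable_rx_irq(dev);          /* my_* = hypothetical driver bits */
        netif_rx_schedule(dev);          /* queue dev for polling */
}

static int my_poll(struct net_device *dev, int *budget)
{
        int limit = *budget < dev->quota ? *budget : dev->quota;
        int done = 0;

        while (done < limit && my_ring_has_work(dev)) {
                struct sk_buff *skb = my_ring_pop(dev);
                netif_receive_skb(skb);  /* stack sees packets in ring order */
                done++;
        }
        dev->quota -= done;
        *budget -= done;

        if (!my_ring_has_work(dev)) {
                netif_rx_complete(dev);  /* all caught up; back to interrupts */
                my_enable_rx_irq(dev);
                return 0;
        }
        return 1;                        /* more work; poll me again */
}

There's still room for shuffling elsewhere (multiple devices, softirqs on different CPUs), which would fit with it helping but not curing the problem.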

Ben




--
Ben Greear <greearb@xxxxxxxxxxxxxxx>       <Ben_Greear AT excite.com>
President of Candela Technologies Inc      http://www.candelatech.com
ScryMUD:  http://scry.wanfear.com     http://scry.wanfear.com/~greear


