
Re: [Bonding-devel] Re: [SET 2][PATCH 2/8][bonding] Propagating master's

To: hadi@xxxxxxxxxx
Subject: Re: [Bonding-devel] Re: [SET 2][PATCH 2/8][bonding] Propagating master's settings to slaves
From: Laurent DENIEL <laurent.deniel@xxxxxxxxxxxxx>
Date: Mon, 11 Aug 2003 16:07:45 +0200
Cc: shmulik.hen@xxxxxxxxx, bonding-devel@xxxxxxxxxxxxxxxxxxxxx, netdev@xxxxxxxxxxx
Organization: THALES ATM
References: <E791C176A6139242A988ABA8B3D9B38A014C9474@xxxxxxxxxxxxxxxxxxxxxxx> <1060570284.1056.15.camel@xxxxxxxxxxxxxxxx> <200308111308.48263.shmulik.hen@xxxxxxxxx> <1060607079.1050.144.camel@xxxxxxxxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
jamal a écrit :
> > Trying to move that from the kernel module into the config application
> > seems to be a very hard task to implement, since we'd have to find a
> > way to make the application constantly aware of specifics like the
> > current topology, slave-to-bond affiliation, the updated status of each
> > slave, etc., etc. It would also mean that the driver would have to
> > wait for the application to tell it what to do each time it needs a
> > decision, and by that we'd surely suffer some performance hit and
> > probably get low availability or temporary loss of communication.
> >
> Not at all. If you let some app control this, I am sure whoever writes
> the app has a vested interest in getting fast failovers etc.

> Basically what I described at the top. Move any "richness" to user
> space.

HP/Compaq/Digital used to take the same approach with their NetRAIN
implementation, and from one release of Tru64 UNIX to the next they
could no longer support millisecond-scale failover resolution, only
seconds, due to the move of such "richness" to user space (among other
things). I am not saying that doing the same on Linux would have the
same result, but a minimal failover policy should remain in the kernel
for performance reasons (or a user-space facility could exist to
*configure* such a policy, without direct interaction with user space
when the kernel has to decide).
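For what it's worth, that split (user space *configures* the policy, the
kernel applies it with no per-failure round-trip) is roughly how the
bonding driver's MII link monitoring already works. A minimal sketch,
assuming the standard bonding module options and the ifenslave tool;
addresses and interface names are examples only:

```shell
# Fix the failover policy up front, at module load time:
# mode=1 is active-backup, miimon=100 checks link state every 100 ms,
# downdelay=200 waits 200 ms before declaring a slave link dead.
modprobe bonding mode=1 miimon=100 downdelay=200

# Bring up the master and enslave the physical NICs.
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1

# From here on, the failover decision (which slave carries traffic)
# is taken entirely in the kernel at miimon granularity; user space
# only chose the policy and is not consulted when a link fails.
```

With this arrangement the detection latency is bounded by miimon plus
downdelay, independent of any user-space process being scheduled.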

