
Re: [PATCH] netif_rx: receive path optimization

To: hadi@xxxxxxxxxx
Subject: Re: [PATCH] netif_rx: receive path optimization
From: Stephen Hemminger <shemminger@xxxxxxxx>
Date: Wed, 30 Mar 2005 15:53:26 -0800
Cc: "David S. Miller" <davem@xxxxxxxxxxxxx>, netdev <netdev@xxxxxxxxxxx>
In-reply-to: <1112219848.1078.93.camel@xxxxxxxxxxxxxxxx>
Organization: Open Source Development Lab
References: <20050330132815.605c17d0@xxxxxxxxxxxxxxxxx> <1112219848.1078.93.camel@xxxxxxxxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
On 30 Mar 2005 16:57:29 -0500
jamal <hadi@xxxxxxxxxx> wrote:

> On Wed, 2005-03-30 at 16:28, Stephen Hemminger wrote:
> > This patch cleans up the netif_rx and related code in the network
> > receive core.
> > 
> >      - Eliminate vestiges of fastroute.
> >        The leftover statistics no longer needed.
> > 
> >      - Get rid of high/med/low threshold return from netif_rx.
> >        Drivers rarely check return value of netif_rx, and those
> >        that do can handle the DROP vs SUCCESS return
> > 
> 
> Please leave this feature in. Drivers that used it have moved on to a
> better life under NAPI; however, it is still useful for anyone who wants
> to take heed of congestion. And in fact it is highly advisable for
> anyone not using NAPI to use it.
> In other words: the work should be to convert users of netif_rx and not
> to get rid of this feature.

How about percentages instead of multiple sysctl values? Or some relationship
between max_queue and max_backlog:
        success  qlen < max_backlog
        low      qlen > max_backlog
        medium   qlen > max_queue/2
        high     qlen > max_queue - max_backlog
        drop     qlen > max_queue

Also, RAND_LIE (dead code) is kind of confusing: I expected it to be a
receive-side version of Random Drop, but it really just lies back to the
caller (and keeps the packet).
