
Re: RFC: NAPI packet weighting patch

To: shemminger@xxxxxxxx
Subject: Re: RFC: NAPI packet weighting patch
From: "David S. Miller" <davem@xxxxxxxxxxxxx>
Date: Thu, 02 Jun 2005 14:40:03 -0700 (PDT)
Cc: john.ronciak@xxxxxxxxx, hadi@xxxxxxxxxx, jdmason@xxxxxxxxxx, mitch.a.williams@xxxxxxxxx, netdev@xxxxxxxxxxx, Robert.Olsson@xxxxxxxxxxx, ganesh.venkatesan@xxxxxxxxx, jesse.brandeburg@xxxxxxxxx
In-reply-to: <20050602143126.7c302cfd@dxpl.pdx.osdl.net>
References: <468F3FDA28AA87429AD807992E22D07E0450BFD0@orsmsx408> <20050602143126.7c302cfd@dxpl.pdx.osdl.net>
Sender: netdev-bounce@xxxxxxxxxxx
From: Stephen Hemminger <shemminger@xxxxxxxx>
Date: Thu, 2 Jun 2005 14:31:26 -0700

> For networking the problem is worse: the "right" choice depends on the workload
> and on the relationship between components in the system. I can't see how you
> could ever expect a driver-specific solution to work.

I totally agree; even the mere concept of driver-centric decisions
in this area is pretty bogus.

> And for other workloads, and other systems (think about the Altix with
> long access latencies), your numbers will be wrong. Perhaps we need
> to quit trying for a perfect solution and just get a "good enough" one
> that works.

I don't understand why nobody is investigating doing this stuff
via generic measurements that the core kernel can perform.

The generic ->poll() runner code can say: wow, it took N usec to
process M packets; perhaps I should adjust the weight.

I haven't seen one concrete suggestion along those lines, yet that is
where the answer to this kind of stuff is.  Those kinds of solutions
are completely CPU, memory, I/O bus, network device, and workload
independent.
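
To make the idea concrete, here is a minimal standalone sketch (not kernel
code, and not anything from this thread) of the feedback loop being described:
the generic poll runner times a poll, derives usec-per-packet, and nudges the
device weight toward a target time budget. All names and constants
(adjust_weight, TARGET_POLL_USEC, the halving step) are made up for
illustration; the point is only that the measurement is generic and
independent of any particular driver.

/*
 * Sketch: adjust a NAPI-style weight from observed time per packet.
 * Compiles and runs in user space; numbers are illustrative only.
 */
#include <stdio.h>

#define TARGET_POLL_USEC 100   /* assumed time budget for one poll run */
#define WEIGHT_MIN       4
#define WEIGHT_MAX       256

static int adjust_weight(int weight, long usec, int packets)
{
        long usec_per_pkt, ideal;

        if (packets <= 0)
                return weight;

        usec_per_pkt = usec / packets;
        if (usec_per_pkt <= 0)
                usec_per_pkt = 1;

        /* How many packets would have fit in the time budget? */
        ideal = TARGET_POLL_USEC / usec_per_pkt;

        /* Move halfway toward the ideal to damp oscillation. */
        weight += (ideal - weight) / 2;

        if (weight < WEIGHT_MIN)
                weight = WEIGHT_MIN;
        if (weight > WEIGHT_MAX)
                weight = WEIGHT_MAX;
        return weight;
}

int main(void)
{
        int weight = 64;

        /* Slow box: 400 usec for 64 packets -> shrink the weight. */
        weight = adjust_weight(weight, 400, 64);
        printf("weight after slow poll: %d\n", weight);

        /* Fast box: 10 usec for 32 packets -> grow the weight. */
        weight = adjust_weight(weight, 10, 32);
        printf("weight after fast poll: %d\n", weight);

        return 0;
}

Because the loop only looks at elapsed time and packet count, the same logic
would behave sensibly on a fast desktop and on something like an Altix with
long access latencies, without any per-driver tuning.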
