On Fri, 2005-06-03 at 12:28 -0700, Mitch Williams wrote:
>
> On Fri, 3 Jun 2005, David S. Miller wrote:
>
> > From: jamal <hadi@xxxxxxxxxx>
> > Date: Fri, 03 Jun 2005 14:42:30 -0400
> >
> > > When you reduce the weight, the system is spending less time in the
> > > softirq processing packets before softirq yields. If this gives more
> > > opportunity to your app to run, then the performance will go up.
> > > Is this what you are seeing?
> >
> > Jamal, this is my current theory as well, we hit the jiffies
> > check.
>
> Well, I hate to mess up your guys' theories, but the real reason is
> simpler: hardware receive resources, specifically descriptors and
> buffers.
>
> In a typical NAPI polling loop, the driver processes receive packets until
> it either hits the quota or runs out of packets. Then, at the end of the
> loop, it returns all of those now-free receive resources back to the
> hardware.
>
> With a heavy receive load, the hardware will run out of receive
> descriptors in the time it takes the driver/NAPI/stack to process 64
> packets. So it drops them on the floor. And, as we know, dropped packets
> are A Bad Thing.
>
> By reducing the driver weight, we cause the driver to give receive
> resources back to the hardware more often, which prevents dropped packets.
>
> As Ben Greer noticed, increasing the number of descriptors can help with
> this issue. But it really can't eliminate the problem -- once the ring
> is full, it doesn't matter how big it is, it's still full.
>
> In my testing (Dual 2.8GHz Xeon, PCI-X bus, Gigabit network, 10 clients),
> I was able to completely eliminate dropped packets in most cases by
> reducing the driver weight down to about 20.
>
> Now for some speculation:
>
What you said above is unfortunately also speculation ;->
But it's one you could validate by putting in proper hooks. As an
example, try restoring a descriptor every time you pick one - the
sb1250 driver does this.
cheers,
jamal