David S. Miller wrote:
> From: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
> Date: Fri, 06 Sep 2002 20:35:08 +0200
>
>    The second point was that interrupt mitigation must remain enabled,
>    even with NAPI: the automatic mitigation doesn't work with
>    process-space-limited loads (e.g. TCP: the backlog queue is drained
>    quickly, but the system is busy processing the prequeue or receive
>    queue).
>
> Not true. NAPI is in fact a 100% replacement for hw interrupt
> mitigation strategies. The cpu usage elimination afforded by
> hw interrupt mitigation is also afforded by NAPI, and even more so.
>
> See Jamal's paper.

I've read his paper: it's about MLFFR. There is no alternative to NAPI
if packets arrive faster than they can be processed by the backlog queue.
But what if the backlog queue is empty all the time? Then NAPI thinks
that the system is idle and re-enables the interrupts after each packet :-(
In my tests I used a Pentium-class system (I have no GigE cards -
that was the only system where I could saturate the CPU with 100 Mbit
ethernet). IIRC 30% of the CPU time was needed for the copy_to_user().
The receive queue was filled, the backlog queue empty. With NAPI I got
one interrupt for each packet; with hw interrupt mitigation the
throughput was 30% higher at MTU 600.
Dave, do you have interrupt rates from the clients with and without NAPI?