>> But what if the backlog queue is empty all the time? Then NAPI thinks
>> that the system is idle, and reenables the interrupts after each packet :-(
Yes, and this happens even without NAPI. Just set RxIntDelay=X and send
packets at an interval >= X+1.
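Roughly, the behaviour looks like this (a toy user-space model just to
illustrate the point; BUDGET, the burst size and all names are made up, this
is not driver code):

#include <stdio.h>

#define BUDGET 16               /* max packets one poll call may consume */

static int irq_enabled = 1;     /* RX interrupt mask state               */
static int ring_fill;           /* packets waiting on the RX ring        */
static int interrupts, polls;

/* "Hardware": a packet lands on the ring; raise an IRQ only if unmasked. */
static void packet_arrives(void)
{
        ring_fill++;
        if (irq_enabled) {
                interrupts++;
                irq_enabled = 0;   /* ISR masks RX irqs and schedules the poll */
        }
}

/* "Softirq": the poll routine drains up to BUDGET packets from the ring. */
static void poll(void)
{
        int done = 0;

        polls++;
        while (ring_fill > 0 && done < BUDGET) {
                ring_fill--;
                done++;
        }
        if (done < BUDGET)
                irq_enabled = 1;   /* ring empty: re-enable RX interrupts */
        /* else: stay in polling mode, poll() gets called again */
}

int main(void)
{
        int i;

        /* Slow traffic: the ring is already empty again when we poll,
         * so every single packet costs one interrupt. */
        for (i = 0; i < 1000; i++) {
                packet_arrives();
                poll();
        }
        printf("slow : 1000 pkts -> %d interrupts, %d polls\n", interrupts, polls);

        interrupts = polls = 0;

        /* Bursty traffic: 64 packets pile up before the poll runs, so
         * interrupts stay masked for most of the packets. */
        for (i = 0; i < 1000; i++) {
                packet_arrives();
                if ((i + 1) % 64 == 0)
                        do
                                poll();
                        while (!irq_enabled);
        }
        while (!irq_enabled)
                poll();
        printf("burst: 1000 pkts -> %d interrupts, %d polls\n", interrupts, polls);
        return 0;
}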
>> Dave, do you have interrupt rates from the clients with and without NAPI?
> Robert does.
Yes, now we get into this interesting discussion... With NAPI we can
safely use RxIntDelay=0 (e1000 terminology). With the classical IRQ scheme we
simply had to add latency (RxIntDelay of 64-128 us was common for GigE) just
to survive at higher speeds (GigE max is 1.48 Mpps), and with the interrupt
delay also come higher network latencies... IMO this delay was a
"work-around" for the old interrupt scheme.
So we now have the option of removing it... But then we are trading lower
latency for more interrupts. So yes, Manfred is correct...
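To put rough numbers on that trade (back-of-the-envelope only; the 64 us
delay and minimum-size frames are just the assumptions from above, not
measurements):

#include <stdio.h>

int main(void)
{
        /* 64-byte frame + 8-byte preamble + 12-byte inter-frame gap */
        double wire_bits = (64 + 8 + 12) * 8;
        double pps       = 1e9 / wire_bits;  /* ~1.48 Mpps at GigE line rate */
        double delay_us  = 64;               /* an assumed RxIntDelay, in us */

        printf("classical irq, RxIntDelay=0 : up to %.3f M interrupts/s\n",
               pps / 1e6);
        printf("classical irq, RxIntDelay=64: at most %.0f interrupts/s,"
               " but up to %.0f us added rx latency\n",
               1e6 / delay_us, delay_us);
        return 0;
}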
So is there a decent setting/compromise?
Well, the first approximation is just to do what DaveM suggested:
RxIntDelay=0. This solved many problems with buggy hardware and complicated
tuning; RxIntDelay used to be combined with other mitigation parameters to
compensate for different packet sizes etc. This led to very "fragile"
performance, where a NIC could perform excellently with a single TCP stream
but be seriously broken in many other tests. So tuning for just one "test"
can cause a lot of mis-tuning as well.
Anyway, a tulip NAPI variant added mitigation only when we reached "some
load", to avoid the static interrupt delay while still keeping things pretty
simple (see the sketch after the list):
Lo  1) RxIntDelay=0
Mid 2) RxIntDelay=fixed (when we had X pkts on the RX ring)
Hi  3) Consecutive polling, no RX interrupts.
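Something in this spirit (thresholds and names are invented and have nothing
to do with the actual tulip patch, it is only a sketch of the idea):

#include <stdio.h>

enum rx_mode { RX_MODE_LO, RX_MODE_MID, RX_MODE_HI };

#define MID_THRESH       8      /* pkts on the RX ring before we add delay  */
#define HI_THRESH       32      /* pkts on the RX ring before we mask irqs  */

/* Decide the mitigation level from the current RX ring occupancy. */
static enum rx_mode pick_rx_mode(int pkts_on_ring)
{
        if (pkts_on_ring >= HI_THRESH)
                return RX_MODE_HI;   /* 3) consecutive polling, RX irqs masked */
        if (pkts_on_ring >= MID_THRESH)
                return RX_MODE_MID;  /* 2) program a fixed RxIntDelay          */
        return RX_MODE_LO;           /* 1) RxIntDelay=0, interrupt per packet  */
}

int main(void)
{
        int fill[] = { 1, 8, 40 };
        const char *name[] = { "Lo", "Mid", "Hi" };
        int i;

        for (i = 0; i < 3; i++)
                printf("%2d pkts on ring -> %s\n",
                       fill[i], name[pick_rx_mode(fill[i])]);
        return 0;
}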
Is it worth the effort?
For SMP w/o affinity the delay could potentially reduce cache bouncing,
since the packets become more "batched", at the cost of latency of course.
We use RxIntDelay=0 in production (IP forwarding on UP).