David S. Miller writes:
> Mala did some testing on this a couple of weeks back. It appears that
> NAPI damaged performance significantly.
> Robert can comment on optimal settings
I see other numbers, so we have to sort out the differences. Andrew Morton
pinged me about this test last week, so I've had a chance to run some tests.
Scaling by CPU load can be a dangerous measure with NAPI because of its adaptive
behaviour, where RX interrupts decrease in favour of successive polls.
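To make that concrete, here is a minimal sketch of the rx path a NAPI driver
uses with this kernel generation. It is not the e1000 code; the my_* names are
hypothetical placeholders for the driver's own hardware helpers.

#include <linux/netdevice.h>
#include <linux/interrupt.h>

/* Hypothetical hardware helpers -- not real e1000 functions. */
static void my_disable_rx_irq(struct net_device *dev);
static void my_enable_rx_irq(struct net_device *dev);
static int my_rx_one(struct net_device *dev);   /* calls netif_receive_skb() */

static void my_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
        struct net_device *dev = dev_id;

        /* Mask further rx interrupts and let dev->poll run from softirq
         * context.  Under heavy load the ring never drains, so the device
         * stays in polling mode and takes no rx interrupts at all. */
        my_disable_rx_irq(dev);
        netif_rx_schedule(dev);
}

static int my_poll(struct net_device *dev, int *budget)
{
        int quota = *budget < dev->quota ? *budget : dev->quota;
        int work = 0;

        while (work < quota && my_rx_one(dev))
                work++;

        *budget -= work;
        dev->quota -= work;

        if (work < quota) {
                /* Ring drained: leave the poll list, re-enable rx interrupts. */
                netif_rx_complete(dev);
                my_enable_rx_irq(dev);
                return 0;
        }
        return 1;       /* more packets pending, stay on the poll list */
}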
Also, the NAPI scheme behaves differently since we cannot assume that all network
traffic is well-behaved like TCP. The system has to be manageable and to "perform"
under any network load, not only well-behaved TCP. So of course we will
see some differences -- there is no free lunch. We simply cannot blindly
look at one test. IMO NAPI is the best overall performer. The numbers speak
for themselves.
Here is the most recent test...
The NAPI kernel path is included in 2.4.20-pre4, so the comparison below is mainly
between the e1000 driver with and without NAPI; the NAPI port to e1000 is still
a separate patch.
Linux 2.4.20-pre4/UP, PIII @ 933 MHz, with Intel's 2-port GigE (e1000) adapter.
e1000 4.3.2-k1 (current kernel version) and the current NAPI patch. For NAPI the
e1000 driver uses RxIntDelay=1; RxIntDelay=0 caused problems. The non-NAPI
driver uses RxIntDelay=64 (the default).
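RxIntDelay is a module parameter of the 2.4 e1000 driver, so the two
configurations are selected at load time. Roughly (multi-port adapters may want
comma-separated per-port values):

modprobe e1000 RxIntDelay=1     # NAPI driver, minimal rx interrupt delay
modprobe e1000 RxIntDelay=64    # non-NAPI driver, the default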
Three tests: TCP, UDP, packet forwarding.
Netperf. TCP socket size 131070, single TCP stream. Test length 30 s.
M-size (bytes)     e1000     NAPI-e1000     (Mbit/s data received)
     4             20.74      20.69
   128            458.14     465.26
   512            836.40     846.71
  1024            936.11     937.93
  2048            940.65     939.92
  4096            940.86     937.59
  8192            940.87     939.95
 16384            940.88     937.61
 32768            940.89     939.92
 65536            940.90     939.48
131070            940.84     939.74
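The exact netperf command lines are not in the mail; an invocation along these
lines should reproduce the TCP rows above (receiver host is a placeholder, -m is
the message size column):

netperf -t TCP_STREAM -H <receiver> -l 30 -- -s 131070 -S 131070 -m 4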
Netperf. UDP_STREAM, 1440-byte packets. Single UDP stream. Test length 30 s.
                   e1000     NAPI-e1000
                   955.7      955.7         (Mbit/s data received)
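Correspondingly, the UDP figures above should come from something like:

netperf -t UDP_STREAM -H <receiver> -l 30 -- -m 1440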
Forwarding test. 1 Mpkts at 970 kpps injected.
                   e1000     NAPI-e1000
T-put                305         298580    (pkts routed)
With the non-NAPI driver this system is "dead" and performs nothing.