I am resending this note with the subject heading, so that
it can be viewed through the subject category.
> "David S. Miller" wrote:
>> NAPI is also not the panacea to all problems in the world.
>Mala did some testing on this a couple of weeks back. It appears that
>NAPI damaged performance significantly.
>Unfortunately it is not listed what e1000 and core NAPI
>patch was used. Also, not listed, are the RX/TX mitigation
>and ring sizes given to the kernel module upon loading.
The default driver included in the 2.5.25 kernel for the Intel
gigabit adapter was used for the baseline test, and the NAPI driver
was downloaded from Robert Olsson's website. I have updated my web
page to include Robert's patch; however, it is given there for reference
purposes only. Except for the values mentioned explicitly, the
configurable parameters were left at their defaults. The default for
RX/TX mitigation is 64 microseconds and the default ring size is 80.
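For reference, these values can be overridden when the e1000 module is
loaded. The parameter names below follow the e1000 driver
documentation; the values shown are only an illustration, not the
settings used in the test:

```shell
# Illustrative only: override interrupt mitigation delays and
# descriptor ring sizes at module load time (parameter names per
# the e1000 driver documentation; values are examples).
modprobe e1000 RxIntDelay=64 TxIntDelay=64 \
               RxDescriptors=256 TxDescriptors=256
```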
I have added statistics collected during the test to my web site. I do
want to analyze and understand how NAPI can be improved in my tcp_stream
test. Last year around November, when I first tested NAPI, I did find NAPI
results better than the baseline using udp_stream. However I am
concentrating on tcp_stream since that is where NAPI can be improved in
my setup. I will update the website as I do more work on this.
>Robert can comment on optimal settings
I saw Robert's postings. It looks like he may have a more recent
version of the driver than the one I used. I also see that 2.5.33 has
NAPI; I will move to that and continue my work there.
>Robert and Jamal can make a more detailed analysis of Mala's
>graphs than I.
Jamal asked about the socket buffer size that I used. I have tried
varying the socket buffer size in the past and did not see much
difference. I will add that to my list again.
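If the tcp_stream runs use netperf (an assumption on my part based on
the test names), the socket buffer sizes can be varied per run with
the test-specific options, for example:

```shell
# Sketch, assuming netperf: run a TCP_STREAM test with larger
# send/receive socket buffers (-s sets the local socket buffer,
# -S the remote one; test-specific options follow the "--").
# The host address is a placeholder.
netperf -t TCP_STREAM -H 192.168.1.2 -- -s 131072 -S 131072
```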
IBM Linux Technology Center - Kernel Performance