jamal wrote:
> On Fri, 2005-03-04 at 03:47, Baruch Even wrote:
>
> Your experiment is more than likely a single flow, correct? In other
> words the whole queue was in fact dedicated just to your one flow.

Indeed. For a router or a web server handling several thousand flows it
might be different, but I don't expect it to spend a millisecond (or
more) on a single packet, as happens with the current end-system ACK
handling code.

> Do you still have the data that shows how many packets were dropped
> during this period? Do you still have the experimental data? I am
> particularly interested in seeing the softnet stats as well as the
> TCP netstats.

No, these tests were not run by me. I'll probably rerun similar tests
as well to base my work on; send me in private how to get the stats
from the kernel and I'll add it to my test scripts.

> I think your main problem was the huge amount of SACK on the write
> queue and the resultant processing, i.e. section 1.1, and how you
> resolved that.

That is my main guess as well. The original work was done rather
quickly; we are now reorganizing our thoughts and redoing the tests in
a more orderly fashion.

> I don't see any issue in dropping ACKs, many of them even, for such
> large windows as you have - TCP's ACKs are cumulative. It is true
> that if you drop "large" enough amounts of ACKs you will end up in
> timeouts - but "large enough" in your case must be at least 1000
> packets. And to say you dropped 1000 packets while processing 300
> means you were taking too long processing the 300.

With the current code SACK processing takes a long time, so it is
quite possible to drop more than a thousand packets while handling
300. I think that after the SACK code is fixed, the rest might work
without putting much into the ingress queue. But that might still
change when we go to even higher speeds.

> Then what would be really interesting is to see the performance you
> get from multiple flows, with and without congestion.
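A minimal sketch of snapshotting the stats asked about above, assuming
the usual 2.6-era /proc files (/proc/net/softnet_stat for the softnet
counters, /proc/net/netstat and /proc/net/snmp for the TCP counters
that "netstat -s" prints); the exact softnet_stat column layout has
varied between kernels, so the comments are only a guide:

/* statsnap.c - dump the per-CPU softnet stats and the TCP counters.
 * Run once before and once after a test run and diff the two outputs.
 * In 2.6-era kernels the first softnet_stat column is packets
 * processed and the second is packets dropped because the ingress
 * (backlog) queue was full.
 */
#include <stdio.h>

static void dump(const char *path)
{
    char buf[4096];
    size_t n;
    FILE *f = fopen(path, "r");

    if (!f) {
        perror(path);
        return;
    }
    printf("==== %s ====\n", path);
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        fwrite(buf, 1, n, stdout);
    fclose(f);
}

int main(void)
{
    dump("/proc/net/softnet_stat");
    dump("/proc/net/netstat");   /* TcpExt: counters */
    dump("/proc/net/snmp");      /* Tcp: counters */
    return 0;
}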
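And on why the current SACK processing eats so much time per ACK (the
section 1.1 problem): the sketch below is not the kernel code, only the
shape of its cost - the 2.6-era tcp_sacktag_write_queue() walks the
whole retransmit queue for every SACK block of every incoming ACK, so
with a few thousand packets in flight each ACK costs on the order of
the window size. The struct and function names here are made up for
the illustration.

/* Illustrative only: the shape of the per-ACK SACK tagging loop.
 * With ~2000 packets in flight and an ACK every couple of segments,
 * this nested walk is where the milliseconds per ACK come from, and
 * while the CPU sits here the ingress queue overflows and drops.
 */
#include <stdio.h>

struct sack_block { unsigned int start_seq, end_seq; };
struct pkt        { unsigned int start_seq, end_seq; int sacked; };

/* Called once per incoming ACK carrying SACK blocks. */
static void sacktag_write_queue(struct pkt *q, int qlen,
                                const struct sack_block *sb, int nsb)
{
    int i, j;

    for (j = 0; j < nsb; j++)           /* up to 3-4 SACK blocks */
        for (i = 0; i < qlen; i++)      /* whole retransmit queue */
            if (q[i].start_seq >= sb[j].start_seq &&
                q[i].end_seq <= sb[j].end_seq)
                q[i].sacked = 1;
}

int main(void)
{
    enum { QLEN = 2000, ACKS = 300 };   /* "processing 300" ACKs */
    static struct pkt q[QLEN];
    struct sack_block sb[3] = { {100, 200}, {300, 400}, {500, 600} };
    int a;

    for (a = 0; a < ACKS; a++)
        sacktag_write_queue(q, QLEN, sb, 3);

    printf("did %d queue walks of %d packets each\n", ACKS * 3, QLEN);
    return 0;
}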
> I am not against the benchmarky nature of the single flow and tuning
> for that, but we should also look at a wider scope at the effect
> before you handwave based on the result of one testcase.

I can't say I didn't handwave, but then there is little experimentation
done to check whether the other claims are correct and whether AFQ is
really needed so early in the packet receive stage. There are also
voices that say AFQ sucks and causes more damage than good; I don't
remember the details at the moment.

> So if I were you I would repeat 1.2 with the fix from 1.1 as well as
> tying the NIC to one CPU. And it would be a good idea to present more
> detailed results - not just TCP windows fluctuating (you may not need
> the other parameters for the paper, but they would be useful to see
> for debugging purposes).

I'd be happy to hear what other benchmarks you would like to see. I
currently intend to add some ACK processing time analysis and oprofile
information, possibly with the size of the ingress queue as a measure
as well. Making it as thorough as possible is one of my goals. Input is
always welcome.

Baruch
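For the "tie the NIC to one CPU" step suggested above, a minimal
sketch, assuming the interrupt is steered through
/proc/irq/<N>/smp_affinity; the IRQ number 24 below is only a
placeholder - take the real one from /proc/interrupts. It is just the
programmatic equivalent of: echo 1 > /proc/irq/24/smp_affinity

#include <stdio.h>

int main(void)
{
    /* Placeholder IRQ: look up the NIC's interrupt in /proc/interrupts. */
    const char *path = "/proc/irq/24/smp_affinity";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror(path);
        return 1;
    }
    fprintf(f, "1\n");          /* CPU bitmask 0x1 = CPU0 only */
    fclose(f);
    return 0;
}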