| To: | Denis Vlasenko <vda@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>, "David S. Miller" <davem@xxxxxxxxxx> |
|---|---|
| Subject: | Re: [PATCH] eliminate large inline's in skbuff |
| From: | Stephen Hemminger <shemminger@xxxxxxxx> |
| Date: | Wed, 5 May 2004 14:59:49 -0700 |
| Cc: | netdev@xxxxxxxxxxx |
| In-reply-to: | <200405020037.47712.vda@port.imtp.ilyichevsk.odessa.ua> |
| Organization: | Open Source Development Lab |
| References: | <200404212226.28350.vda@port.imtp.ilyichevsk.odessa.ua> <Xine.LNX.4.44.0404212046490.20483-100000@thoron.boston.redhat.com> <20040427142136.35b521d5@dell_ss3.pdx.osdl.net> <200405020037.47712.vda@port.imtp.ilyichevsk.odessa.ua> |
| Sender: | netdev-bounce@xxxxxxxxxxx |
> On Wednesday 28 April 2004 00:21, Stephen Hemminger wrote:
> > This takes the suggestion and makes all the locked skb_ stuff not inline.
> > It showed a 3% performance improvement when doing a single TCP stream over 1G
> > Ethernet between SMP machines. The test was the average of 10 iterations of
> > iperf for 30 seconds, and the SUT was a 4-way Xeon. The HTTP performance
> > difference was not noticeable (less than the std. deviation of the test runs).

The original tests were suspect for a whole lot of reasons. Running the proper tests shows no performance difference. The best theory as to why there was a difference in the earlier tests is that memory debugging was enabled, which caused each buffer to be overwritten with a memset. When that happens, the test ends up measuring memory and cache bandwidth, not the CPU or the network.