To: "David S. Miller" <davem@xxxxxxxxxx>
Subject: Re: [patch] e1000 TSO parameter
From: Anton Blanchard <anton@xxxxxxxxx>
Date: Tue, 29 Jul 2003 16:53:07 +1000
Cc: davidm@xxxxxxxxxx, davidm@xxxxxxxxxxxxxxxxx, scott.feldman@xxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, netdev@xxxxxxxxxxx
In-reply-to: <20030714223822.23b78f9b.davem@xxxxxxxxxx>
References: <C6F5CF431189FA4CBAEC9E7DD5441E0102229169@xxxxxxxxxxxxxxxxxxxxxx> <20030714214510.17e02a9f.davem@xxxxxxxxxx> <16147.37268.946613.965075@xxxxxxxxxxxxxxxxx> <20030714223822.23b78f9b.davem@xxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
User-agent: Mutt/1.5.4i
Hi,

> > So we get almost 15% of throughput drop. This was with plain "netkit
> > fptd". AFAIK, it does a simple read/write loop (not sendfile()).

We've been seeing rather variable results for TSO as well. With TSO off,
netperf TCP_STREAM will hit line speed and stay there. With TSO on, some
runs will hit line speed and others will be about 100Mbit/sec slower.

> When we use TSO for non-sendfile() applications it really
> stresses memory allocations. We do these 64K+ kmalloc()'s
> for each packet we construct.

Yep, we definitely noticed much higher allocation rates when watching
/proc/slabinfo. Playing around with slab tuning didn't seem to help.

Anton
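[The read()/write() versus sendfile() distinction above is what drives the per-packet allocations. A minimal illustrative C sketch of the two send paths follows; the `sock`/`fd` descriptors, the function names, and the 64KB chunk size are assumptions for illustration, not code from this thread.]

```c
#include <sys/types.h>
#include <sys/sendfile.h>
#include <unistd.h>

#define CHUNK 65536  /* assumed 64KB, matching one TSO super-packet */

/* read()/write() loop: every write() copies the buffer into freshly
 * allocated kernel socket memory, so with TSO the stack ends up doing
 * one large (64K+) allocation per super-packet it constructs. */
static int send_copy_loop(int sock, int fd)
{
	char buf[CHUNK];
	ssize_t n;

	while ((n = read(fd, buf, sizeof(buf))) > 0)
		if (write(sock, buf, n) != n)
			return -1;
	return n < 0 ? -1 : 0;
}

/* sendfile(): the stack transmits page-cache pages by reference, so the
 * transmit side avoids the per-packet 64K copy and allocation. */
static int send_zero_copy(int sock, int fd, off_t len)
{
	off_t off = 0;

	while (off < len) {
		ssize_t n = sendfile(sock, fd, &off, len - off);
		if (n <= 0)
			return -1;
	}
	return 0;
}
```

[For brevity the sketch treats a short write() as an error; a real sender would retry the remainder.]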