Robert.Olsson@xxxxxxxxxxx said:
> 10 million packets injected at high speed into eth2 and forwarded to
> eth3. Rx and Tx buffers are 256, HW_FLOW is disabled, and RxIntDelay=1,
> which are the same parameters we use on production systems. As seen,
> the link now flaps. Possibly hw_flowcontrol and interrupt delays could
> help with this, but that's not an option, at least not for us.
>
>
> Twist:            New                       Old
> ================================================
> Input rate:       680 (due to link drop)    820 kpps
> T-put:            309                       385 kpps
> RX irq's:         78963                     434
I've seen pretty much the same thing. I plotted throughput vs. offered
load for e1000 4.4.12-k1, 4.4.19-k3, and 5.0.43-k1 (all backported to
2.4.20). A summary with graphs is at:
http://gtf.org/lunz/linux/net/perf/
5.0.43 seems to be a significant regression in both throughput and CPU
load.
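For reference, the setup Robert describes corresponds to e1000 module
options roughly like the following. This is a sketch based on the module
parameters documented for the e1000 driver (RxDescriptors, TxDescriptors,
FlowControl, RxIntDelay); exact names and defaults may differ between the
driver versions tested here, so check your version's docs before using it.

```shell
# /etc/modules.conf entry (2.4-era modutils) approximating the quoted setup:
#   256-entry Rx/Tx descriptor rings, hardware flow control off,
#   and a minimal receive interrupt delay.
options e1000 RxDescriptors=256 TxDescriptors=256 FlowControl=0 RxIntDelay=1

# Or when loading the module by hand:
modprobe e1000 RxDescriptors=256 TxDescriptors=256 FlowControl=0 RxIntDelay=1
```

With RxIntDelay=1 the adapter fires an interrupt almost immediately per
received frame, which is consistent with the high RX irq counts in the
table above.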
--
Jason Lunz Reflex Security
lunz@xxxxxxxxxxxxxxxxxx http://www.reflexsecurity.com/