I "never" see that because I always bind a NIC to a specific CPU :)
Just about every networking-intensive benchmark report I've seen has
done the same.
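For the archives: on Linux that usually means finding the NIC's IRQ in
/proc/interrupts and writing a CPU mask to /proc/irq/<N>/smp_affinity.
A minimal sketch in C, assuming a hypothetical IRQ number of 24 and
root privileges:

    /* Pin a NIC's interrupt to CPU 0 by writing a hex CPU bitmask to
     * /proc/irq/<N>/smp_affinity.  The IRQ number 24 is made up for
     * the example; look up the real one in /proc/interrupts. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/irq/24/smp_affinity", "w");

        if (!f) {
            perror("fopen");
            return 1;
        }
        fputs("1\n", f);    /* bitmask 0x1: bit 0 set, i.e. CPU 0 only */
        fclose(f);
        return 0;
    }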
Just a reminder that the networking-benchmark world and
the real networking deployment world have a less than desirable
intersection (which I know you know only too well, Rick ;)).
Touche :)
How often do people use affinity? How often do they really tune
the system for their workloads?
Not as often as they should.
> How often do they turn off things like SACK etc?
Well, I'm in an email discussion with someone who seems to bump their TCP
windows quite large, and disable timestamps...
Not very often in the real world. Designing OSs to
do better at benchmarks is a different proposition than designing
OSs to do well in the real world.
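For reference, the sort of tuning being talked about boils down to a few
sysctls (net.ipv4.tcp_sack, net.ipv4.tcp_timestamps, net.ipv4.tcp_rmem /
tcp_wmem) plus per-socket buffer sizing.  A minimal sketch of the
socket-buffer piece; the 4 MB figure is purely illustrative, not a
recommendation:

    /* Bump the per-socket buffers (and hence the achievable TCP
     * window) with setsockopt().  Note Linux doubles the requested
     * value internally and caps it at net.core.rmem_max/wmem_max,
     * and SO_RCVBUF should be set before connect()/listen() if it is
     * to be reflected in the advertised window scale. */
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        int sz = 4 * 1024 * 1024;            /* illustrative only */

        if (s < 0) {
            perror("socket");
            return 1;
        }
        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &sz, sizeof(sz)) < 0)
            perror("SO_RCVBUF");
        if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sz, sizeof(sz)) < 0)
            perror("SO_SNDBUF");
        return 0;
    }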
BTW, what is the real-world purpose of spreading a NIC's interrupts
across multiple CPUs? I have to admit it seems rather alien to me.
(In the context of no onboard NIC smarts being involved, that is.)
Note Linux is quite resilient to reordering compared to other OSes (as
you may know), but avoiding it altogether is a better approach, hence my
suggestion to use NAPI when you want to do serious TCP.
The real killer for TCP is triggering fast retransmit unnecessarily.
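Concretely: the standard fast retransmit rule fires after three
duplicate ACKs, and a segment that was merely reordered, e.g. because
its flow's packets landed on different CPUs, generates exactly those
dupacks.  A toy sketch of the rule, not the kernel's implementation;
Linux seeds its threshold from the net.ipv4.tcp_reordering sysctl:

    /* Toy model of the classic fast retransmit trigger (RFC 2581/5681),
     * NOT kernel code.  Segment 5 is not lost, merely reordered behind
     * segments 6-8; each of those makes the receiver repeat its ACK,
     * and the third repeat retransmits data that was never missing. */
    #include <stdio.h>

    #define DUPACK_THRESHOLD 3   /* cf. net.ipv4.tcp_reordering */

    int main(void)
    {
        int dupacks = 0;

        for (int seg = 6; seg <= 8; seg++) {   /* out-of-order arrivals */
            dupacks++;                         /* receiver re-ACKs segment 4 */
            printf("duplicate ACK #%d\n", dupacks);
            if (dupacks >= DUPACK_THRESHOLD)
                printf("spurious fast retransmit of segment 5\n");
        }
        return 0;
    }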
Agreed. That is doubleplusungood.
rick