jamal writes:
> > Yes data from an Opteron @ 1.6 GHz w. e1000 82546EB 64 byte pkts.
> >
> > 133 MHz 830 pps
> > 100 MHz 721 pps
> > 66 MHz 561 pps
Well, the pps should be kpps, but everybody seems to understand this.
> BTW, is this per interface? i thought i have seen numbers in the range
> of 1.3Mpps from you.
Yes, 1.3 Mpps is the aggregated forwarding performance from two 1.6 GHz
Opterons, in a setup where CPU0 handles eth0->eth1 and CPU1 handles
eth2->eth3.
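(For anyone wanting to reproduce the split: I'm assuming the usual
/proc/irq/<n>/smp_affinity mechanism here, nothing exotic. A minimal
sketch; the IRQ numbers are placeholders, the real ones come from
/proc/interrupts:

/* Minimal sketch, assuming the standard /proc/irq/<n>/smp_affinity
 * interface. IRQ numbers are placeholders; check /proc/interrupts. */
#include <stdio.h>

static int pin_irq(int irq, unsigned mask)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
    if (!(f = fopen(path, "w")))
        return -1;
    fprintf(f, "%x\n", mask);   /* affinity bitmask: bit N = CPU N */
    return fclose(f);
}

int main(void)
{
    pin_irq(24, 0x1);   /* eth0 -> CPU0 (placeholder IRQ) */
    pin_irq(25, 0x1);   /* eth1 -> CPU0 */
    pin_irq(26, 0x2);   /* eth2 -> CPU1 */
    pin_irq(27, 0x2);   /* eth3 -> CPU1 */
    return 0;
}
)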
Since a single NIC's TX path does not keep up with the packet budget,
I had to use several "flows" to saturate it.
This is a small breakthrough: for the first time we see aggregated
performance scaling with packet forwarding, and we finally got
something back for all the multiprocessor effort.
IMO this is much more important than squeezing the last percent out of
the single-CPU pps numbers.
But the aggregated performance is only seen with Opterons. My
conclusion, as we discussed, is that the memory controller is local to
each CPU, which gives lower latency, and every additional CPU adds
another memory controller. Compare this with designs where many CPUs
share the same controller and memory.
> What Pádraig posted in regards to the MMRBC register is actually
> enlightening. I kept thinking about it after I sent my last email.
> If indeed the overhead is incurred in the setup (everything in my test
> setups points at this) then increasing the burst size should show
> improvements.
It's worth testing...
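For anyone who wants to poke at it, here is a minimal sketch of
reading the current MMRBC out of the PCI-X capability in config space.
The sysfs device path is a placeholder for the 82546EB's address, and
reading the full config space needs root:

/* Minimal sketch: walk the PCI capability list in config space and
 * report MMRBC (max memory read byte count) for a PCI-X device.
 * MMRBC is bits 3:2 of the PCI-X command register (cap ID 0x07). */
#include <stdio.h>
#include <stdint.h>

#define PCI_CAP_PTR   0x34   /* offset of first-capability pointer */
#define PCI_CAP_PCIX  0x07   /* PCI-X capability ID */

int main(void)
{
    /* placeholder address; adjust to the NIC under test */
    const char *cfg = "/sys/bus/pci/devices/0000:02:04.0/config";
    uint8_t buf[256];
    FILE *f = fopen(cfg, "rb");

    if (!f) {
        perror(cfg);
        return 1;
    }
    if (fread(buf, 1, sizeof(buf), f) != sizeof(buf)) {
        fprintf(stderr, "short read (need root for full config space)\n");
        return 1;
    }
    fclose(f);

    /* walk the capability list looking for the PCI-X capability */
    for (uint8_t off = buf[PCI_CAP_PTR]; off; off = buf[off + 1]) {
        if (buf[off] == PCI_CAP_PCIX) {
            /* PCI-X command register: 16 bits at cap offset 2 */
            uint16_t cmd = buf[off + 2] | (buf[off + 3] << 8);
            unsigned mmrbc = 512 << ((cmd >> 2) & 0x3);
            printf("MMRBC: %u bytes\n", mmrbc);
            return 0;
        }
    }
    fprintf(stderr, "no PCI-X capability found\n");
    return 1;
}

If the read burst really is the bottleneck, bumping that field from
512 up to 2048 or 4096 (setpci can write the same register) should
move the 64-byte numbers.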
Cheers.
--ro