>>>>> "ANK" == kuznet <kuznet@xxxxxxxxxxxxx> writes:
ANK> Jes wrote:
>> One thing that might be worth investigating is that the AceNIC has
>> a high latency for reading buffer descriptors. One of the plans I
>> have is to linearize small skb's before handing them to the NIC.
ANK> Small skbs in these tests are ACKs; they are linear.
ANK> Also, even with host ring, all the fragment descriptors are read
ANK> in one DMA transaction. Or do you mean reading data chunks, not
ANK> descriptors?
I don't remember all the details; I just remember Ted Schroeder (one of
the Alteon founders) recommending that I linearize small transfers, as
loading buffer descriptors could cost up to 5us.
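
To give an idea of what I mean, something along these lines (a rough
sketch using the stock skb_is_nonlinear()/skb_linearize() helpers --
older kernels pass a gfp argument to skb_linearize() -- and the 256
byte cutoff is a made-up placeholder, not a measured value):

	#include <linux/skbuff.h>
	#include <linux/errno.h>

	#define ACE_LINEARIZE_THRESHOLD 256	/* placeholder cutoff */

	/* Collapse small fragmented skbs into one contiguous buffer so
	 * the NIC has to fetch only a single buffer descriptor. */
	static int ace_maybe_linearize(struct sk_buff *skb)
	{
		if (skb->len <= ACE_LINEARIZE_THRESHOLD &&
		    skb_is_nonlinear(skb)) {
			/* Copies all fragments into the head buffer;
			 * may fail under memory pressure. */
			if (skb_linearize(skb))
				return -ENOMEM;
		}
		return 0;
	}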
ANK> In any case, the maximal latency is 5-7usec, which is not a big
ANK> number for TCP with a jumbo MTU, where latency is dominated by
ANK> bulk DMA. But, if my arithmetic is correct, this really puts a
ANK> theoretical limit on transmission of 1500 byte frames: ~90MB/sec.
ANK> (BTW, Jes, you enabled tx host ring in the latest driver. Did
ANK> you notice that it increases latency by ~1 usec?)
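If I follow the arithmetic (my assumption: 32-bit/33MHz PCI with a
~132MB/sec peak), the limit works out roughly as:

	~5usec descriptor fetch + ~11.4usec to DMA 1500 bytes at 132MB/sec
	= ~16.4usec per frame  =>  1500 bytes / 16.4usec = ~90MB/sec
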
The numbers for jumbo MTUs are not all that exciting; what really
matters is how we perform on 1.5K packets. 95% of the switches on the
market don't do 9K packets, hence very, very few people use them ;-(
No, I didn't notice the 1us extra latency. I made the change to reduce
the slow writes to PCI shared memory, which are becoming even more
significant now that host memory speeds keep increasing while PCI
speed does not. If it becomes a real issue we can put the
non-host-ring support back in.
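
For anyone following along, the tradeoff looks roughly like this (a
sketch only -- the descriptor layout and register names here are
invented, not the real AceNIC ones):

	#include <linux/types.h>
	#include <linux/io.h>
	#include <asm/byteorder.h>

	/* Hypothetical descriptor layout, not the real AceNIC one. */
	struct tx_desc {
		u32 addr;
		u32 len_flags;
	};

	/* Without the host ring the descriptor lives in NIC SRAM behind
	 * a PCI window, so both stores are slow uncached PCI writes. */
	static void tx_post_nic_ring(struct tx_desc __iomem *nic_ring,
				     int idx, u32 mapping, u32 len_flags)
	{
		writel(mapping, &nic_ring[idx].addr);
		writel(len_flags, &nic_ring[idx].len_flags);
	}

	/* With the host ring the descriptor lives in ordinary host
	 * memory and the NIC DMAs it in; the CPU pays for only one PCI
	 * write, the producer-index kick -- at the cost of the extra
	 * descriptor DMA, i.e. the ~1usec increase ANK mentioned. */
	static void tx_post_host_ring(struct tx_desc *host_ring,
				      u32 __iomem *tx_prd, int idx,
				      u32 mapping, u32 len_flags)
	{
		host_ring[idx].addr = cpu_to_le32(mapping);
		host_ring[idx].len_flags = cpu_to_le32(len_flags);
		wmb();	/* descriptor visible to the NIC before the kick */
		writel(idx + 1, tx_prd);
	}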
Jes