On Fri, Nov 26, 2004 at 09:56:59PM +0100, Lennert Buytenhek wrote:
> On an e1000 in a 32b 66MHz PCI slot (Intel server mainboard, e1000 'desktop'
> NIC) I'm seeing that exact curve for packet sizes > ~350 bytes, but for
> smaller packets than that, the curve goes like p=264000000/(s+335) (which
> is accurate to +/- 100pps.) The 2.64e8 component is exactly the theoretical
> max. bandwidth of the PCI slot the card is in, the 335 a random constant
> that accounts for latency. On a different mobo I get a curve following
> the same formula but different value for 335.
> The same card in a 32b 33MHz PCI slot in a cheap Asus desktop board gives
> something a bit stranger:
> - p=132000000/(s+260) for s<128
> - p=132000000/(s+390) for 128<=s<256
> - p=132000000/(s+520) for 256<=s<384
> - ...
This could be explained by observing that on the Intel mobo, the NIC sits
on a dedicated PCI bus, while on the cheap Asus board, all PCI slots plus
all onboard devices share the same PCI bus. Probably after pulling in a
single burst of packet data (32 clocks here, which sounds about right), the
NIC has to relinquish the bus to other bus masters and wait for 128 byte
times until it gets to pull packet data from RAM again.
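The stepped curves above are consistent with a fixed re-arbitration penalty
per additional 128-byte burst. A quick sketch of that model (the function
name, the 130-byte per-burst penalty, and the 260-byte base latency are my
guesses fitted to the three curves quoted above, not measured constants):

```python
PCI_33_BPS = 132_000_000  # 32-bit 33 MHz PCI: 132 MB/s theoretical max

def pps(size, base_latency=260, per_burst_penalty=130, burst_bytes=128):
    """Predicted packets/sec, assuming the NIC pays one extra
    arbitration penalty for each additional 128-byte burst."""
    extra_bursts = size // burst_bytes  # bursts beyond the first
    latency = base_latency + per_burst_penalty * extra_bursts
    return PCI_33_BPS / (size + latency)

for s in (64, 200, 300):
    print(s, round(pps(s)))
```

For s<128 this reduces to p=132000000/(s+260), for 128<=s<256 to
p=132000000/(s+390), and so on, matching the measurements.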
Would be interesting to find out where the latency is coming from. Find
a way to reduce/work around that, and the 64b packet case will benefit as
well.