
Re: TX performance of Intel 82546

To: P@xxxxxxxxxxxxxx
Subject: Re: TX performance of Intel 82546
From: Robert Olsson <Robert.Olsson@xxxxxxxxxxx>
Date: Wed, 15 Sep 2004 14:36:25 +0200
Cc: Harald Welte <laforge@xxxxxxxxxxxxx>, Linux NICS <linux.nics@xxxxxxxxx>, netdev@xxxxxxxxxxx
In-reply-to: <414808F3.70104@xxxxxxxxxxxxxx>
References: <20040915081439.GA27038@xxxxxxxxxxxxxxxxxxxxxxx> <414808F3.70104@xxxxxxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
P@xxxxxxxxxxxxxx writes:
 > Harald Welte wrote:

 > > I'm currently trying to help Robert Olsson improve the performance of
 > > the Linux in-kernel packet generator (pktgen.c).  At the moment, we seem
 > > to be unable to get more than 760 kpps from a single port of an 82546
 > > (or any other PCI-X MAC supported by e1000) - that's a bit more than 51%
 > > of wire speed at 64-byte packet sizes.

 Yes, it seems Intel adapters work better in BSD: they claim to route
 1 Mpps, while we cannot send more than ~750 kpps even when only
 feeding the adapter. :-)
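
 For reference: a minimum-size frame costs 84 bytes on the wire
 (8 bytes preamble/SFD + 64-byte frame incl. FCS + 12 bytes
 inter-frame gap), so GigE wire speed is about 1.488 Mpps; that is
 where Harald's 51% comes from. Trivial arithmetic, but as a check:

    /* wirespeed.c: theoretical GigE packet rate for minimum-size
     * frames.  Assumes standard framing: 8 bytes preamble/SFD +
     * 64-byte frame (FCS included) + 12 bytes inter-frame gap.
     */
    #include <stdio.h>

    int main(void)
    {
        double slot_bits = (8 + 64 + 12) * 8;  /* 672 bits on the wire */
        double wirespeed = 1e9 / slot_bits;    /* at 1 Gbit/s          */

        printf("wire speed: %.0f pps\n", wirespeed);   /* ~1488095 */
        printf("760 kpps = %.1f%% of wire speed\n",
               100.0 * 760e3 / wirespeed);             /* ~51.1%   */
        return 0;
    }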

 > In my experience, anything around 750 kpps is a PCI limitation,
 > specifically PCI bus arbitration latency. Note that the clock speed
 > of the control signal used for bus arbitration has not increased
 > in proportion to the PCI data clock speed.

 Yes, data from an Opteron @ 1.6 GHz w. e1000 82546EB, 64-byte pkts:

  PCI-X clock   TX rate
  133 MHz       830 kpps
  100 MHz       721 kpps
   66 MHz       561 kpps

 So a higher bus clock does increase the small-packet rate, but far
 from proportionally: doubling the clock from 66 to 133 MHz buys only
 about 48% more packets.
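
 That shape is what P's theory predicts: per-packet time should be a
 fixed, clock-independent cost plus some number of bus cycles.
 Fitting that two-parameter model to the numbers above (my model,
 nothing measured on the bus itself):

    /* pcifit.c: fit the measured rates to
     *     time/pkt = overhead + cycles / bus_clock
     * where "overhead" is clock-independent (arbitration, setup)
     * and "cycles" scales with the PCI-X clock.
     */
    #include <stdio.h>

    int main(void)
    {
        double f1 = 133e6, r1 = 830e3;   /* measured above */
        double f2 = 100e6, r2 = 721e3;
        double f3 =  66e6, r3 = 561e3;

        /* two-point solve from the 133 and 66 MHz rows */
        double t1 = 1.0 / r1, t3 = 1.0 / r3;
        double cycles   = (t3 - t1) / (1.0 / f3 - 1.0 / f1);
        double overhead = t1 - cycles / f1;

        printf("fixed overhead: %.0f ns/pkt\n", overhead * 1e9);
        printf("clocked cost:   %.0f cycles/pkt\n", cycles);

        /* sanity check against the 100 MHz row */
        printf("predicted @ 100 MHz: %.0f kpps (measured %.0f)\n",
               1e-3 / (overhead + cycles / f2), r2 * 1e-3);
        return 0;
    }

 The fit gives roughly 640 ns of fixed cost plus ~76 bus cycles per
 packet, and predicts the 100 MHz row within half a percent. So a
 large clock-independent per-transaction cost looks very plausible.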

 So is there a difference in PCI tuning between BSD and Linux?
 And, more generally, can we measure the maximum number of
 transactions on a PCI bus?

 The chip should be able to transfer 64 packets in a single burst,
 but I don't know how to set or verify this.
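
 One config-space knob that bounds burst length on PCI-X is MMRBC
 (maximum memory read byte count) in the PCI-X command register;
 whether that is what keeps us below 64 packets per burst is my
 speculation, since the descriptor prefetch/writeback thresholds live
 in the MAC's own registers (TXDCTL etc.), not in config space. A
 sketch with libpci (pciutils) that just reads MMRBC out:

    /* mmrbc.c: print the PCI-X maximum memory read byte count
     * (MMRBC) for every device that has a PCI-X capability.
     * Build: gcc mmrbc.c -lpci   (needs pciutils/libpci)
     */
    #include <stdio.h>
    #include <pci/pci.h>

    int main(void)
    {
        struct pci_access *pacc = pci_alloc();
        struct pci_dev *dev;

        pci_init(pacc);
        pci_scan_bus(pacc);

        for (dev = pacc->devices; dev; dev = dev->next) {
            u8 pos;

            if (!(pci_read_word(dev, PCI_STATUS) & PCI_STATUS_CAP_LIST))
                continue;

            /* walk the capability list, looking for PCI-X (ID 0x07) */
            pos = pci_read_byte(dev, PCI_CAPABILITY_LIST) & ~3;
            while (pos) {
                if (pci_read_byte(dev, pos) == PCI_CAP_ID_PCIX) {
                    u16 cmd = pci_read_word(dev, pos + 2);
                    /* MMRBC is bits 3:2: 0..3 -> 512..4096 bytes */
                    printf("%02x:%02x.%d MMRBC = %d bytes\n",
                           dev->bus, dev->dev, dev->func,
                           512 << ((cmd >> 2) & 3));
                    break;
                }
                pos = pci_read_byte(dev, pos + 1) & ~3;
            }
        }
        pci_cleanup(pacc);
        return 0;
    }

 setpci can write the same register if someone wants to experiment
 with larger bursts.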

 Cheers.
                                                --ro
