Re: TX performance of Intel 82546

To: Robert Olsson <Robert.Olsson@xxxxxxxxxxx>
Subject: Re: TX performance of Intel 82546
From: jamal <hadi@xxxxxxxxxx>
Date: 15 Sep 2004 09:49:51 -0400
Cc: P@xxxxxxxxxxxxxx, Harald Welte <laforge@xxxxxxxxxxxxx>, Linux NICS <linux.nics@xxxxxxxxx>, netdev@xxxxxxxxxxx
In-reply-to: <16712.14153.683690.710955@robur.slu.se>
Organization: jamalopolous
References: <20040915081439.GA27038@sunbeam.de.gnumonks.org> <414808F3.70104@draigBrady.com> <16712.14153.683690.710955@robur.slu.se>
Reply-to: hadi@xxxxxxxxxx
Sender: netdev-bounce@xxxxxxxxxxx
On Wed, 2004-09-15 at 08:36, Robert Olsson wrote:

>  > In my experience anything around 750Kpps is a PCI limitation,
>  > specifically PCI bus arbitration latency. Note the clock speed of
>  > the control signal used for bus arbitration has not increased
>  > in proportion to the PCI data clock speed.
> 
>  Yes data from an Opteron @ 1.6 GHz w. e1000 82546EB 64 byte pkts.
> 
>  133 MHz 830 kpps
>  100 MHz 721 kpps
>   66 MHz 561 kpps
> 
>  So higher bus bandwidth could increase the small packet rate.

Nice data.
BTW, is this per interface? I thought I had seen numbers in the range
of 1.3 Mpps from you.
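
Just to put some numbers on why the setup cost matters, here is a
back-of-the-envelope sketch of my own (not from your data directly): it
assumes a 64-bit/133 MHz bus and the 830 kpps figure above, and ignores
descriptor DMA and register writes, so those all land in the "overhead"
bucket.

/*
 * Back-of-the-envelope estimate: how much of the per-packet time on a
 * 64-bit PCI-X bus is spent actually moving payload data, using the
 * 64-byte / 830 kpps / 133 MHz numbers from this thread.  Descriptor
 * fetches, writebacks and register I/O are ignored, so the remainder
 * lumps them together with arbitration/setup latency.
 */
#include <stdio.h>

int main(void)
{
    const double bus_hz    = 133e6;   /* PCI-X clock, assumed 133 MHz */
    const double bus_bytes = 8.0;     /* 64-bit data path             */
    const double pkt_bytes = 64.0;    /* minimum-size packet          */
    const double pkt_rate  = 830e3;   /* observed TX rate (pps)       */

    double cycle_ns  = 1e9 / bus_hz;                      /* ~7.5 ns   */
    double data_ns   = pkt_bytes / bus_bytes * cycle_ns;  /* data phase*/
    double budget_ns = 1e9 / pkt_rate;                    /* per packet*/

    printf("data phase : %6.1f ns/packet\n", data_ns);
    printf("budget     : %6.1f ns/packet\n", budget_ns);
    printf("data share : %5.1f %% (rest is setup/arbitration/descriptors)\n",
           100.0 * data_ns / budget_ns);
    return 0;
}

Only about 5% of the per-packet budget is data phase on those
assumptions; the rest is arbitration, setup and descriptor traffic.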

>  So is there a difference in PCI-tuning BSD versus Linux? 

As far as I could tell, they batch transmits (mostly because of the way
mbufs are structured, really).

>  And even more general can we measure the maximum numbers
>  of transactions on a PCI-bus?

You would need specialized hardware for this, I think.

>  Chip should be able to transfer 64 packets in a single burst. I don't know
>  how to set/verify this.

What Pádraig posted regarding the MMRBC register is actually
enlightening. I kept thinking about it after I sent my last email.
If the overhead is indeed incurred in the per-transaction setup
(everything in my test setups points to this), then increasing the
burst size should show an improvement.
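
For reference, MMRBC sits in bits 3:2 of the PCI-X command register and
encodes the maximum memory read byte count as 512/1024/2048/4096 bytes.
A minimal sketch of the decoding (the register value below is made up;
on a live box you would read the word with lspci -xxx or setpci):

/*
 * Minimal sketch, not a driver patch: decode the MMRBC field of the
 * PCI-X command register, which limits how many bytes the NIC may
 * fetch per memory-read burst.  The word would come from a config
 * space dump; the value used here is just an example.
 */
#include <stdio.h>

#define PCIX_CMD_MAX_READ_MASK  0x000c  /* MMRBC, bits 3:2 */
#define PCIX_CMD_MAX_READ_SHIFT 2

static unsigned int mmrbc_bytes(unsigned short pcix_cmd)
{
    /* Encoding per the PCI-X spec: 0 -> 512, 1 -> 1024, 2 -> 2048, 3 -> 4096 */
    unsigned int code = (pcix_cmd & PCIX_CMD_MAX_READ_MASK) >> PCIX_CMD_MAX_READ_SHIFT;
    return 512u << code;
}

int main(void)
{
    unsigned short example_cmd = 0x0008;   /* hypothetical dump value: code 2 */

    printf("MMRBC = %u bytes per read burst\n", mmrbc_bytes(example_cmd));
    return 0;
}

If the per-transaction setup really dominates, bumping this field from
512 towards 2048/4096 lets the chip pull more data per bus grant, which
is where the improvement should come from.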

cheers,
jamal


