
Re: TX performance of Intel 82546

To: P@xxxxxxxxxxxxxx
Subject: Re: TX performance of Intel 82546
From: Harald Welte <laforge@xxxxxxxxxxxxx>
Date: Wed, 15 Sep 2004 20:15:16 +0200
Cc: Robert Olsson <Robert.Olsson@xxxxxxxxxxx>, netdev@xxxxxxxxxxx
In-reply-to: <41484AC2.8090408@xxxxxxxxxxxxxx>
References: <20040915081439.GA27038@xxxxxxxxxxxxxxxxxxxxxxx> <414808F3.70104@xxxxxxxxxxxxxx> <16712.14153.683690.710955@xxxxxxxxxxxx> <41484AC2.8090408@xxxxxxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
User-agent: Mutt/1.5.6+20040818i
On Wed, Sep 15, 2004 at 02:59:30PM +0100, P@xxxxxxxxxxxxxx wrote:
> Interesting info thanks!
> It would be very interesting to see the performance of PCI express
> which should not have the bus arbitration issues.

Unfortunately there is no e1000 for PCI Express available yet... only
Marvell Yukon and SysKonnect single-port boards so far :(

> Well from the intel docs they say "The devices include a PCI interface
> that maximizes the use of bursts for efficient bus usage.
> The controllers are able to cache up to 64 packet descriptors in
> a single burst for efficient PCI bandwidth usage."
> So I'm guessing that increasing the PCI-X burst size setting
> (MMRBC) will automatically get more packets sent per transfer?
> I said previously in this thread to google for setpci and MMRBC,
> but what I know about it is...
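[Editorial note, not part of the original mail: MMRBC (Maximum Memory Read Byte Count) is bits [3:2] of the PCI-X Command Register, which sits at offset 2 in the device's PCI-X capability; this is the field a `setpci ... CAP_PCIX+2.W` read/write touches. A minimal sketch of the encoding, per the PCI-X spec:]

```python
# Sketch: decode the MMRBC field from a PCI-X Command Register value.
# MMRBC is bits [3:2]; encoding per the PCI-X spec:
#   00 = 512, 01 = 1024, 10 = 2048, 11 = 4096 bytes per burst.
MMRBC_BYTES = {0: 512, 1: 1024, 2: 2048, 3: 4096}

def mmrbc_bytes(pcix_cmd: int) -> int:
    """Return the maximum memory read burst size (bytes) encoded
    in a 16-bit PCI-X Command Register value."""
    return MMRBC_BYTES[(pcix_cmd >> 2) & 0x3]

# A register value with bits [3:2] = 11 encodes the 4096-byte burst:
print(mmrbc_bytes(0x000C))  # → 4096
print(mmrbc_bytes(0x0000))  # → 512
```

[A read-modify-write of only those two bits is why the value:mask syntax of setpci (e.g. `=0xC:0xC` to select 4096-byte bursts) is the usual way to change MMRBC without disturbing the rest of the register.]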

Mh, I tried it on my system with the following parameters:

dual 82546GB, PCI-X, 64bit, 66MHz, UP x86_64 kernel, modified e1000 with
hard-wired tx descriptor refill.

I did not observe any change in tx pps throughput when setting MMRBC to
512-byte vs. 4096-byte bursts.
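
[Editorial note, not part of the original mail: for context on what tx pps throughput can top out at, the theoretical ceiling for gigabit Ethernet at minimum frame size is about 1.488 Mpps, since each 64-byte frame occupies 64 + 8 (preamble/SFD) + 12 (inter-frame gap) = 84 byte times on the wire. The arithmetic:]

```python
# Theoretical maximum packet rate on gigabit Ethernet at minimum frame size.
LINK_BPS = 1_000_000_000   # 1 Gbit/s line rate
FRAME = 64                 # minimum Ethernet frame incl. CRC, bytes
PREAMBLE = 8               # preamble + start-of-frame delimiter, bytes
IFG = 12                   # inter-frame gap, bytes

wire_bytes = FRAME + PREAMBLE + IFG      # 84 bytes of wire time per packet
max_pps = LINK_BPS // (wire_bytes * 8)   # → 1_488_095 packets/s
print(max_pps)
```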

- Harald Welte <laforge@xxxxxxxxxxxxx>   
  "Fragmentation is like classful addressing -- an interesting early
   architectural error that shows how much experimentation was going
   on while IP was being designed."                    -- Paul Vixie

