
Re: [E1000-devel] Transmission limit

To: Harald Welte <laforge@xxxxxxxxxxxx>
Subject: Re: [E1000-devel] Transmission limit
From: Cesar Marcondes <cesar@xxxxxxxxxxx>
Date: Sat, 27 Nov 2004 12:12:29 -0800 (PST)
Cc: Marco Mellia <mellia@xxxxxxxxxxxxxxxxxxxx>, P@xxxxxxxxxxxxxx, e1000-devel@xxxxxxxxxxxxxxxxxxxxx, Jorge Manuel Finochietto <jorge.finochietto@xxxxxxxxx>, Giulio Galante <galante@xxxxxxxxx>, netdev@xxxxxxxxxxx
In-reply-to: <20041127092503.GA12592@xxxxxxxxxxxxxxxxxxxxxxx>
References: <1101467291.24742.70.camel@xxxxxxxxxxxxxxxxxxxxxx> <41A73826.3000109@xxxxxxxxxxxxxx> <1101483081.24742.174.camel@xxxxxxxxxxxxxxxxxxxxxx> <20041127092503.GA12592@xxxxxxxxxxxxxxxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
STOP !!!!

On Sat, 27 Nov 2004, Harald Welte wrote:

> On Fri, Nov 26, 2004 at 04:31:21PM +0100, Marco Mellia wrote:
> > If you don't trust us, please, ignore this email.
> > Sorry.
> >
> > That's the number we have, and it is actually very similar to what
> > other colleagues of ours got.
> >
> > The point is:
> > while a PCI-X Linux (or Click) box can receive at wire speed and
> > beyond (receiving just up to the netif_receive_skb() level and then
> > discarding the skb) using off-the-shelf gigabit Ethernet hardware,
> > there is no way to transmit at more than about half that speed. This
> > is true for minimum-sized Ethernet frames.
> Yes, I've seen this, too.
> I even rewrote the Linux e1000 driver to refill the tx queue from the
> hardirq handler, and it didn't help.  760 kpps is the most I could ever
> get (133 MHz 64-bit PCI-X on a Sun Fire v20z, dual Opteron 1.8 GHz).
> I posted this result to netdev at some earlier point; I also Cc'ed
> Intel but never got a reply.
> My guess is that Intel always knew this and they want to sell their CSA
> chips rather than improving the PCI e1000.
> We are hitting a hard limit here, either PCI-X-wise or e1000-wise.
> You cannot refill the tx queue any faster than from the hardirq
> handler, and even then the numbers don't improve.
> It was suggested that the problem is PCI DMA arbitration latency, since
> the hardware needs to arbitrate the bus for every packet.
> Interestingly, if you use a four-port e1000 the numbers get even worse
> (580 kpps), because the additional PCI-X bridge on the card introduces
> further latency.
> --
> - Harald Welte <laforge@xxxxxxxxxxxx>     
> ============================================================================
> Programming is like sex: One mistake and you have to support it your lifetime
