To: Scott Feldman <sfeldma@xxxxxxxxx>
Subject: Re: [E1000-devel] Transmission limit
From: Lennert Buytenhek <buytenh@xxxxxxxxxxxxxx>
Date: Fri, 3 Dec 2004 21:57:06 +0100
Cc: jamal <hadi@xxxxxxxxxx>, Robert Olsson <Robert.Olsson@xxxxxxxxxxx>, P@xxxxxxxxxxxxxx, mellia@xxxxxxxxxxxxxxxxxxxx, e1000-devel@xxxxxxxxxxxxxxxxxxxxx, Jorge Manuel Finochietto <jorge.finochietto@xxxxxxxxx>, Giulio Galante <galante@xxxxxxxxx>, netdev@xxxxxxxxxxx
In-reply-to: <1101863399.4663.54.camel@sfeldma-mobl.dsl-verizon.net>
References: <16807.20052.569125.686158@robur.slu.se> <1101484740.24742.213.camel@mellia.lipar.polito.it> <41A76085.7000105@draigBrady.com> <1101499285.1079.45.camel@jzny.localdomain> <16811.8052.678955.795327@robur.slu.se> <1101821501.1043.43.camel@jzny.localdomain> <20041130134600.GA31515@xi.wantstofly.org> <1101824754.1044.126.camel@jzny.localdomain> <20041201001107.GE4203@xi.wantstofly.org> <1101863399.4663.54.camel@sfeldma-mobl.dsl-verizon.net>
Sender: netdev-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4.1i
On Tue, Nov 30, 2004 at 05:09:59PM -0800, Scott Feldman wrote:

> Hey, turns out, I know some e1000 tricks that might help get the kpps
> numbers up.  
> 
> My problem is I only have a P4 desktop system with an 82544 NIC running
> at PCI 32/33MHz, so I can't play with the big boys.  But, attached is a
> rework of the Tx path to eliminate 1) Tx interrupts, and 2) Tx
> descriptor write-backs.  For me, I see a nice jump in kpps, but I'd like
> others to try with their setups.  We should be able to get to wire speed
> with 60-byte packets.

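(For anyone following along who hasn't seen the patch: my understanding of
the trick is roughly the sketch below.  This is not Scott's actual code,
just a hedged illustration; the register/bit names are the ones from the
e1000 driver as I remember them, and the ring fields/helpers may not match
the real thing exactly.)

/*
 * Stock driver: the last descriptor of every packet gets E1000_TXD_CMD_RS
 * set, so the NIC DMAs the DD status bit back into host memory and raises
 * a Tx interrupt; e1000_clean_tx_irq() then walks the ring testing DD.
 *
 * The rework: don't set RS at all (no status write-back, no Tx interrupt)
 * and instead reclaim buffers by reading how far the hardware head pointer
 * (TDH) has advanced, e.g. from the Rx cleanup path or a timer.
 */
static void e1000_clean_tx_by_head(struct e1000_adapter *adapter)
{
	unsigned int head = E1000_READ_REG(&adapter->hw, TDH);
	unsigned int i = adapter->tx_ring.next_to_clean;

	while (i != head) {
		e1000_unmap_and_free_tx_resource(adapter,
				&adapter->tx_ring.buffer_info[i]);
		if (++i == adapter->tx_ring.count)
			i = 0;
	}
	adapter->tx_ring.next_to_clean = i;
}
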
Attached is a graph of my numbers with and without your patch for:
- An 82540 at PCI 32/33 (an idle 33MHz-only card on the same bus forces the
  bus down to 33MHz).
- An 82541 at PCI 32/66.
- An 82546 at PCI-X 64/100 (the NIC can do 133MHz, but the motherboard only
  does 100MHz).

All 'phi' tests were done on my box phi, a dual 2.4GHz Xeon on an Intel
SE7505VB2 board (http://www.intel.com/design/servers/se7505vb2/).  I've
included Robert's 64/133 numbers ('sourcemage') on his dual 866MHz P3 for
comparison.  I didn't test all packet sizes up to 1500, just the first few
hundred bytes for each.

As before, the maximum pps at 60B packets is strongly influenced by the
per-packet overhead (which your patch reduces quite a bit on my machine,
also at 64/100, even though Robert sees no improvement at 64/133), while
the slope of each curve appears to depend only on the speed of the bus the
NIC sits in.  I.e. the 60B kpps number more or less determines the shape
of the rest of the graph in each case.
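
If it helps reading the graph, the model I have in mind is simply
per-packet cost = fixed overhead + time on the bus, so the 60B point pins
down the overhead and the bus speed pins down the slope.  Toy calculation
below; the 500 kpps and the 66MHz bus are placeholders, not measurements:

#include <stdio.h>

int main(void)
{
	double bus_bytes_per_sec = 66e6 * 4;	/* 32-bit @ 66MHz, theoretical peak */
	double kpps_at_60b = 500.0;		/* placeholder 60B measurement */
	double overhead;
	int size;

	/* back out the fixed per-packet overhead from the 60B point */
	overhead = 1.0 / (kpps_at_60b * 1e3) - 60.0 / bus_bytes_per_sec;

	for (size = 60; size <= 300; size += 40)
		printf("%3dB: ~%.0f kpps\n", size,
		       1.0 / (overhead + size / bus_bytes_per_sec) / 1e3);

	return 0;
}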

Bus speed is most likely also the reason why the 64/100 setup without your
patch starts off slower than the 32/66 with your patch, but then eventually
overtakes it (around 140B packets), just before they both hit the GigE
saturation point.

There's no drop at 256B for the 64/100 setup like there is with the 32/*
setups.  Perhaps the drop at 256B comes from the PCI latency timer being
set to 64 by default: on a 32-bit bus that's 64 clocks * 4 bytes/clock =
256 bytes, so bursts would get broken up into 256-byte chunks?

I'm not able to saturate gigabit on 32/33 with 1500B packets, while Jamal
does.  Another thing to look into.
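
Back-of-envelope, in case it's relevant: at 1500B the raw 32/33 numbers
leave essentially no headroom, so it wouldn't take much extra per-packet
overhead to fall just short of line rate.  Rough arithmetic, assuming one
16-byte descriptor fetch per packet and no write-backs:

#include <stdio.h>

int main(void)
{
	double pci_peak   = 33e6 * 4.0;			/* ~132 MB/s theoretical */
	double pps        = 1e9 / ((1500 + 24) * 8.0);	/* + CRC, preamble, IFG */
	double dma_needed = pps * (1500 + 16);		/* packet data + descriptor */

	printf("need ~%.0f of ~%.0f MB/s peak\n", dma_needed / 1e6, pci_peak / 1e6);
	return 0;
}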

Also note that the 64/100 NIC has rather wobbly performance between 60B and
~160B.  This 'square wave pattern' is there both with and without your
patch, so it's perhaps something particular to the NIC.  Its period appears
to be 16 bytes: throughput drops where packet_size mod 16 = 0 and jumps
back up a bit when packet_size mod 16 = 6.  Odd.


--L

Attachment: perf.png
Description: PNG image
