Re: Queue and SMP locking discussion (was Re: 3c59x.c)

To: Michael Richardson <mcr@xxxxxxxxxxx>
Subject: Re: Queue and SMP locking discussion (was Re: 3c59x.c)
From: jamal <hadi@xxxxxxxxxx>
Date: Sat, 1 Apr 2000 10:28:25 -0500 (EST)
Cc: netdev@xxxxxxxxxxx
In-reply-to: <200003311908.OAA00894@xxxxxxxxxxx>
Sender: owner-netdev@xxxxxxxxxxx

On Fri, 31 Mar 2000, Michael Richardson wrote:

>       http://www.research.solidum.com/papers/ols1999/top.html
> 

Michael,

I think we had this debate during your presentation ;-> Here are my
thoughts:

Bus latency is not a problem as far as throughput is concerned. This
problem can be equated to *exactly* the high RTT-BW problem in TCP: you
just have to adjust your ring-buffering accordingly. I don't think
processing latency is an issue either; even with your broken pcnet
driver[1] you come up with a number of 4007 cycles to process a
packet. Get yourself a faster processor ;-> 
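
To put rough numbers on this (a quick user-space C sketch; the bus
latency and CPU clock below are illustrative assumptions, not
measurements -- only the 4007 cycles/packet figure is from your paper):

#include <stdio.h>

int main(void)
{
	/* Ring sizing: like TCP's RTT*BW product, the rx ring only has
	 * to cover the packets that arrive while bus transactions are
	 * outstanding.  Both inputs here are assumed for illustration. */
	double bus_latency_us = 10.0;   /* assumed worst-case bus latency */
	double pkt_rate_pps = 148810.0; /* 100Mbps, minimum-size frames */
	/* Processing latency: cycles/packet caps throughput at
	 * cpu_hz / cycles_per_packet. */
	double cpu_hz = 450e6;          /* assumed 450MHz box */

	printf("rx ring must cover ~%.1f in-flight packets\n",
	       bus_latency_us * pkt_rate_pps / 1e6);
	printf("CPU-bound limit at 4007 cycles/pkt: ~%.0f pps\n",
	       cpu_hz / 4007.0);
	return 0;
}

Even a pessimistic 10 microsecs of bus latency only costs a couple of
ring entries at those rates, which is the point: latency is a buffering
problem, not a throughput one.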
So your assertion that "the 33Mhz, 32 bit PCI bus itself can
theoretically handle up to one and a half million (1428571 to
be exact) frames per second, or 50 10 Mb/s adaptors" is misleading.
I realize you say it is theoretical; however, ask people who use Alexey's
fast forwarding driver and they'll tell you they definitely do more than
50 Mbps.
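
(For the record, my reconstruction of that arithmetic: 33 MHz x 4
bytes/cycle = 132 Mbytes/s of raw bus bandwidth, and 132e6 / 1,428,571
works out to roughly 92 bytes of bus traffic per minimum-size frame --
presumably 64 bytes of data plus descriptor/DMA overhead, though the
exact overhead model is yours. Even taken at face value it is a
bandwidth ceiling, not a latency limit.)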
 
BTW, current 2.3 kernels allow you to use APICs even on a single
processor.

cheers,
jamal

[1] A modified tulip driver at 100 Mbps full duplex which does all the rx
processing (recording stats etc.) but drops the packet instead of passing
it up the stack easily handles 150 Kpps. I have only tested with one
interface; the stats are derived simply by using ifconfig and comparing
against the hardware generator -- nothing fancy. I should retry it blasting
at two NICs and see whether they can both handle it. This was a while back,
using a hardware traffic generator (very precise interpacket times of 0.96
microsecs) with some 2.2 kernel.
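
For context, 150 Kpps is essentially wire rate: a minimum-size frame at
100 Mbps occupies 64 + 8 (preamble) + 12 (interframe gap) = 84 byte
times, i.e. 672 bit times, and 100e6 / 672 comes to about 148,810
frames/sec. The 0.96 microsec spacing is exactly the 96-bit minimum
interframe gap at 100 Mbps, i.e. the generator was blasting flat out
and the driver kept up.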

