On Tue, 9 Oct 2001, Santiago Garcia Mantinan wrote:
[snip]
> > I guess I am confused a little about your results.
> > Also, what would help is to describe your packet sizes, send and
> > receive rates, etc.
>
> The packets are 4-byte UDP packets, sent at the full rate at which a
> P200MMX running 2.2.19 and a P166 running 2.4.10 can send them. I haven't
> calculated any rates, but if you want I can try to; one could guess it
> from the number of interrupts that 2.2.19 was showing, I suppose. If we
> assume that, then it would be 38500 packets per second, but if I calculate
> it from the running time of the test and the number of received packets
> shown by ifconfig I get 42500, which seems a more accurate number to me.
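For reference, a minimal stand-in for a sender like that (this is just a
sketch, not the actual udpspam tool; the target host and port taken from
the command line are placeholders) only needs to do something like the
following, and it reports pps the same way, packets sent divided by
elapsed time:

  #!/usr/bin/env python
  # Sketch of a minimal UDP flooder: 4-byte payloads, sent as fast as possible.
  import socket, sys, time

  target = (sys.argv[1], int(sys.argv[2]))   # e.g. 10.0.0.2 9000 (placeholders)
  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  payload = b"\x00" * 4                      # 4-byte UDP payload, as in the test
  sent = 0
  start = time.time()
  try:
      while True:
          sock.sendto(payload, target)
          sent += 1
  except KeyboardInterrupt:
      elapsed = time.time() - start
      print("%d packets in %.1fs -> %.0f pps" % (sent, elapsed, sent / elapsed))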
I ran this test here.
The receiving end was a 2 x pIII 600 server with a D-Link DFE570-TX NIC
(quad tulip) running 2.4.8-ac12 + tulip-ss010402-polling driver.
The sending machine was my workstation here, a pIII 700 with an eepro100 NIC
running 2.4.9-ac5. This machine is attached to eth1 on the server via a
crossover cable.
The receiver had a few iptables modules loaded; ip_conntrack was one of
them. (The sender didn't have any iptables modules loaded.)
The thing limiting the packets per second received by the server was the
rate at which my workstation could send them out.
When I left my workstation alone (stopped typing on the keyboard, didn't
move the mouse and stopped the mp3s) the server received ~100k pps.
This was no real problem for the server; it didn't drop any packets or get
any overruns, it just happily chewed along.
This was an SMP machine, but I tried binding the receiving interface's IRQ
to CPU0 using smp_affinity, and that didn't make any difference in the
performance of the machine. According to vmstat one of the CPUs was
running at about 90-100% load (leaving about 50-55% idle overall).
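For anyone who hasn't played with it: binding an interface to a CPU just
means writing a CPU bitmask to /proc/irq/<N>/smp_affinity for the NIC's
IRQ. A rough sketch (the IRQ number here is an assumption; look up the
real one for eth1 in /proc/interrupts):

  # Sketch only: pin eth1's interrupt to CPU0.
  IRQ = 24      # assumed IRQ for eth1 -- check /proc/interrupts on your box
  MASK = 0x1    # bit 0 set -> only CPU0 services this interrupt
  with open("/proc/irq/%d/smp_affinity" % IRQ, "w") as f:
      f.write("%x\n" % MASK)

A plain 'echo 1 > /proc/irq/24/smp_affinity' from a shell does the same
thing, of course.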
As I couldn't get it to start dropping packets with this load of 100k pps,
I increased the load a little.
I attached two more machines, on eth0 and eth3, but I didn't run udpspam
on them; they just ran 'nc serverip port < /dev/zero' and the server ran
two 'nc -l -p port > /dev/null'.
This test also involves some pipe activity, so it's not a pure network test.
Still, both of the new senders managed to send to the server at a rate
between 95 and 98 Mbit/s, in full-size 1500-byte packets.
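If someone wants to repeat this without nc and the /dev/zero and /dev/null
redirection, two small scripts along these lines do roughly the same thing
(the port number 9000 is arbitrary; this is just a sketch, not what I ran):

  # Sender side -- rough equivalent of 'nc serverip 9000 < /dev/zero'
  import socket, sys
  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  s.connect((sys.argv[1], 9000))
  chunk = b"\x00" * 65536
  while True:
      s.sendall(chunk)

  # Receiver side -- rough equivalent of 'nc -l -p 9000 > /dev/null'
  import socket
  srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  srv.bind(("", 9000))
  srv.listen(1)
  conn, _ = srv.accept()
  while conn.recv(65536):
      pass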
When the server started receiving these two extra streams it did get a
little busy and began dropping packets on eth1. (I had reset smp_affinity
to ffffffff before starting this test.)
Then I put eth1 (the one with the high pps) on CPU0 and eth[03] on CPU1,
and then everybody was happy. According to vmstat there was about 30% CPU
idle in total on the server.
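In smp_affinity terms that split is just a different mask per IRQ,
something like the sketch below (the IRQ numbers are again made up and
would really come from /proc/interrupts):

  # Sketch: eth1 pinned to CPU0, eth0 and eth3 pinned to CPU1.
  affinity = {
      26: 0x1,   # eth1 -> CPU0 (the high-pps UDP stream)
      24: 0x2,   # eth0 -> CPU1 (bulk TCP)
      29: 0x2,   # eth3 -> CPU1 (bulk TCP)
  }
  for irq, mask in affinity.items():
      with open("/proc/irq/%d/smp_affinity" % irq, "w") as f:
          f.write("%x\n" % mask)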
This last test probably doesn't help at all, but the point is that my
SMP pIII 600 had no real problems receiving 100k pps even with the NIC
bound to one CPU. And it can probably do even better when it's routing the
packets instead of terminating the stream (my other tests have shown this
to be true).
The tulip-ss010402-polling driver is, as the name says, a polling driver:
a hybrid that runs with interrupts at low packet rates and switches to
polling under higher loads. I probably don't have to explain this to Jamal
or Robert, as they are the ones who implemented this stuff :)
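To make the trade-off concrete, here is a crude toy model, with completely
made-up cost numbers and nothing to do with the actual driver internals:
interrupt-per-packet cost scales linearly with pps, while a polled ring
pays a roughly fixed overhead plus the per-packet work, so interrupts win
at low rates and polling wins at high rates.

  # Toy model only -- made-up numbers, not measurements of the real driver.
  def cpu_cost(pps, irq_cost_us=10.0, per_packet_us=2.0,
               poll_hz=2000, poll_overhead_us=15.0):
      # Return (interrupt-mode, polling-mode) CPU use in microseconds
      # spent per second of traffic.
      interrupt_mode = pps * (irq_cost_us + per_packet_us)
      polling_mode = poll_hz * poll_overhead_us + pps * per_packet_us
      return interrupt_mode, polling_mode

  for pps in (1000, 10000, 100000):
      irq, poll = cpu_cost(pps)
      print("%6d pps: interrupts ~%.0f%% CPU, polling ~%.0f%% CPU"
            % (pps, irq / 1e4, poll / 1e4))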
/Martin
Never argue with an idiot. They drag you down to their level, then beat you
with experience.