
To: Andrew Morton <andrewm@xxxxxxxxxx>
Subject: Re: More measurements
From: Andi Kleen <ak@xxxxxx>
Date: Tue, 30 Jan 2001 12:48:40 +0100
Cc: netdev@xxxxxxxxxxx
In-reply-to: <3A75785A.42B9E7CE@xxxxxxxxxx>; from Andrew Morton on Tue, Jan 30, 2001 at 10:08:02AM +0100
References: <3A75785A.42B9E7CE@xxxxxxxxxx>
Sender: owner-netdev@xxxxxxxxxxx
On Tue, Jan 30, 2001 at 10:08:02AM +0100, Andrew Morton wrote:
> Lots of interesting things here.
> 
> - eepro100 generates more interrupts doing TCP Tx, but not
>   TCP Rx.  I assume it doesn't do Tx mitigation?

The Intel driver (e100.c) uploads special firmware and does interrupt
mitigation for both RX and TX; eepro100 doesn't. Perhaps you could measure
that driver too? Unfortunately it doesn't support zerocopy.
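Roughly, TX mitigation just means not asking the NIC for a completion
interrupt on every descriptor. A minimal sketch of the idea (not actual
e100 or eepro100 code -- the names and fields here are made up):

#define TX_IRQ_EVERY	16

static void queue_tx(struct fake_nic *nic, struct sk_buff *skb)
{
	unsigned int i = nic->tx_head;
	struct fake_txdesc *d = &nic->tx_ring[i];

	d->addr = virt_to_bus(skb->data);	/* DMA address of the data */
	d->len  = skb->len;
	d->cmd  = TXCMD_OWN;
	/* request a completion interrupt only every 16th packet; the
	 * interrupt handler then reclaims finished descriptors in batches */
	if ((i % TX_IRQ_EVERY) == 0)
		d->cmd |= TXCMD_IRQ;

	nic->tx_head = (i + 1) % TX_RING_SIZE;
}

e100 goes further and lets the firmware delay and coalesce the interrupts
itself, for RX as well as TX.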

> 
> - Changing eepro100 to use IO operations instead of MMIO slows
>   down this dual 500MHz machine by less than one percent at
>   100 mbps.  At 12,000 interrupts per second. Why all the fuss
>   about MMIO?

IIRC Ingo at some point found on some monster machine that the IO operations
in the eepro100 interrupt handler dominated a TUX profile.

> 
> - Bonding the 905's interrupt to CPU0 slows things down slightly.
>   (This is contrary to other measurements I've previously taken.
>    Don't pay any attention to this).

;)

> 
> - Without the zc patch, there is a significant increase (25%) in
>   the number of Rx packets (acks, presumably) when data is sent
>   using sendfile() as opposed to when the same data is sent
>   with send().

RX on the sender?
> 
>   Workload: 62 files, average size 350k.
>             sendfile() tries to send the entire file in one hit
>             send() breaks it up into 64kbyte chunks.
> 
>   When the zerocopy patch is applied, the Rx packet rate during
>   sendfile() is the same as the rate during send().
> 
>   Why is this?

Does the send() variant use TCP_CORK?
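With TCP_CORK held across the 64k writes the chunks get coalesced into
full-sized segments instead of each send() ending in a partial one, which
changes the segment (and thus ACK) count. Minimal sketch of a corked
sender (assuming a connected blocking TCP socket and an open input file,
error handling trimmed):

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

static int send_file_chunked(int sock, int filefd)
{
	char buf[65536];	/* 64 kbyte chunks, as in the test */
	ssize_t n;
	int on = 1, off = 0;

	/* cork: let the stack pack chunk tails into full segments */
	setsockopt(sock, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));

	while ((n = read(filefd, buf, sizeof(buf))) > 0)
		if (send(sock, buf, n, 0) != n)
			return -1;

	/* uncork: push out whatever partial segment is left */
	setsockopt(sock, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));

	return n < 0 ? -1 : 0;
}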

> - I see a consistent 12-13% slowdown on send() with the zerocopy
>   patch.  Can this be fixed?

Ugh. 


-Andi
