
Re: Is sendfile all that sexy?

To: <kuznet@xxxxxxxxxxxxx>
Subject: Re: Is sendfile all that sexy?
From: jamal <hadi@xxxxxxxxxx>
Date: Tue, 16 Jan 2001 07:55:11 -0500 (EST)
Cc: <netdev@xxxxxxxxxxx>
In-reply-to: <200101151817.VAA12009@xxxxxxxxxxxxx>
Sender: owner-netdev@xxxxxxxxxxx

Sorry to disappoint you, Alexey: the numbers haven't changed ;-<
It would be helpful if someone else with two SMP machines and
two gigE cards could try this out; I'll send my setup to anybody -- email
me.

Setup:
=====
** ttcp is the traffic source/sink.
- the receiver is MSG_TRUNCing (see the sketch after this list)

** Sender: SMP PII-450MHz, ASUS m/board; 3Com version of the AceNIC
- 1M version
** Receiver: same hardware; Alteon AceNIC card - 1M version
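
For the curious, "MSG_TRUNCing" amounts to something like the minimal
sink below -- a sketch, not ttcp itself. It assumes a Linux receiver,
where passing MSG_TRUNC to recv() on a TCP socket makes the kernel
discard the payload instead of copying it to user space, so the
receive-side copy drops out of the measurement. The port number is
just a placeholder.

/*
 * Minimal MSG_TRUNC sink: accept one TCP connection and drain it
 * without copying payload to user space.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
        struct sockaddr_in sin;
        char dummy[65536];      /* never filled under MSG_TRUNC */
        long total = 0, n;
        int one = 1;
        int ls = socket(AF_INET, SOCK_STREAM, 0);

        if (ls < 0) {
                perror("socket");
                return 1;
        }
        setsockopt(ls, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(5001);    /* placeholder port */

        if (bind(ls, (struct sockaddr *)&sin, sizeof(sin)) < 0 ||
            listen(ls, 1) < 0) {
                perror("bind/listen");
                return 1;
        }

        int fd = accept(ls, NULL, NULL);
        if (fd < 0) {
                perror("accept");
                return 1;
        }
        while ((n = recv(fd, dummy, sizeof(dummy), MSG_TRUNC)) > 0)
                total += n;     /* bytes drained in-kernel */
        printf("drained %ld bytes\n", total);
        close(fd);
        close(ls);
        return 0;
}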


Results are:
============
- SF means sendfile() on the sender with the usual goodies (TCP_CORK etc);
  see the sketch after the table
- NSF means the usual write() to a socket from user space
- ZC means the zero-copy patches included in the kernel


Kernel             |   tput    | sender-CPU | receiver-CPU |
------------------------------------------------------------
2.4.0-pre3 NSF     | 99 MB/s   |    87%     |     23%      |
2.4.0-pre3 SF      | 86 MB/s   |   100%     |     17%      |
2.4.0-pre3 +ZC NSF | 66.2 MB/s |    60%     |     11%      |
2.4.0-pre3 +ZC SF  | 68 MB/s   |     8%     |      8%      |
------------------------------------------------------------
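
For anyone reproducing the SF rows, the sender path looks roughly
like the sketch below -- the corked-sendfile() pattern (the "usual
goodies" above), not ttcp's actual loop. The fd/path parameters are
placeholders; fd is assumed to be an already-connected TCP socket.

/*
 * SF-style sender: cork the socket, pump a file down it with
 * sendfile(2), then uncork so the final partial segment goes out.
 */
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/socket.h>
#include <sys/sendfile.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

static int send_file_corked(int fd, const char *path)
{
        int one = 1, zero = 0;
        struct stat st;
        off_t off = 0;
        int in = open(path, O_RDONLY);

        if (in < 0)
                return -1;
        if (fstat(in, &st) < 0) {
                close(in);
                return -1;
        }

        /* cork: hold back partial frames so sendfile() can emit
         * full-sized segments */
        setsockopt(fd, IPPROTO_TCP, TCP_CORK, &one, sizeof(one));

        while (off < st.st_size) {
                if (sendfile(fd, in, &off, st.st_size - off) < 0) {
                        close(in);
                        return -1;
                }
        }

        /* uncork: flush whatever partial segment is left */
        setsockopt(fd, IPPROTO_TCP, TCP_CORK, &zero, sizeof(zero));
        close(in);
        return 0;
}

The cork/uncork pair is the point of the "goodies": without it,
sendfile() can leave a runt segment on the wire at each chunk
boundary.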

Observations?
=============

With ZC, CPU goes down but so does throughput. I don't understand why
ZC also brought down CPU on the regular writes to the socket;
perhaps it's something else in that general patch.
For something like a web server which opens gazillions of
connections, this is fantastic news; for an FTP server, a single flow
might not be able to fill the pipe.

cheers,
jamal

