
zerocopy results on GigE

To: netdev@xxxxxxxxxxx
Subject: zerocopy results on GigE
From: Pekka Pietikainen <pp@xxxxxxxxxxxxxx>
Date: Tue, 6 Feb 2001 19:49:19 +0200
Sender: owner-netdev@xxxxxxxxxxx
Here are some benchmarks I ran today; they look quite similar to what
Jamal was getting.

Jumbo frames; the sender is a dual PIII/500 with 32-bit/66 MHz PCI, the
receiver a dual PII/450 with 32-bit/33 MHz PCI, and both have 1MB Alteons.
CPU use is measured with cyclesoak.
SO_RCVBUF/SO_SNDBUF are set to 512k, no other sockopts touched.
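
For reference, the socket buffer setup is roughly the following (a minimal
sketch, not the actual tester code; the function and variable names are
mine, and fd is assumed to be the TCP socket):

#include <stdio.h>
#include <sys/socket.h>

/* Bump both socket buffers to 512k; these are the only sockopts touched.
 * Note the kernel may cap the value at net.core.rmem_max/wmem_max. */
static int set_bufs(int fd)
{
        int sz = 512 * 1024;

        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sz, sizeof(sz)) < 0 ||
            setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &sz, sizeof(sz)) < 0) {
                perror("setsockopt");
                return -1;
        }
        return 0;
}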

I did the tests with a nice little modular network tester I've been
hacking on; ttcp/gensink give similar results.

zerocopy-2.4.1-2

Writes were done in 512k chunks (except for sendfile(), where the whole file
was transmitted in one call).
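
To make the rows below concrete, the transmit-side variants boil down to
roughly this (a sketch only: error handling and short writes are ignored,
and the names sock, fd, len and the helpers are mine, not the tester's):

#include <stdlib.h>
#include <sys/mman.h>
#include <sys/sendfile.h>
#include <sys/types.h>
#include <unistd.h>

#define CHUNK (512 * 1024)

/* "writing from buffer": push the same 512k chunk out over and over */
static void send_from_buffer(int sock, size_t total)
{
        char *buf = calloc(1, CHUNK);
        size_t done;

        for (done = 0; done < total; done += CHUNK)
                write(sock, buf, CHUNK);
        free(buf);
}

/* "read/write from file": the classic two-copy loop, 512k at a time */
static void send_read_write(int sock, int fd)
{
        char *buf = malloc(CHUNK);
        ssize_t n;

        while ((n = read(fd, buf, CHUNK)) > 0)
                write(sock, buf, n);
        free(buf);
}

/* "sendfile()": hand the whole file to the kernel in one call */
static void send_sendfile(int sock, int fd, size_t len)
{
        off_t off = 0;

        sendfile(sock, fd, &off, len);
}

/* "mmap()/write": map the file, then write() it out in 512k chunks */
static void send_mmap_write(int sock, int fd, size_t len)
{
        char *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
        size_t done, n;

        for (done = 0; done < len; done += n) {
                n = len - done > CHUNK ? CHUNK : len - done;
                write(sock, p + done, n);
        }
        munmap(p, len);
}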

Test                            bandwidth       CPU(rcv)        CPU(transmit)

writing from buffer             65MB/s          45%             30%
same with MSG_TRUNC on receiver 81MB/s          21%             40%

64MB file:

read/write from file            48MB/s          34%             53%
read/write from file+MSG_TRUNC  48MB/s          14%             53%
sendfile()                      62MB/s          45%             8%
sendfile()+MSG_TRUNC            80MB/s          21%             14%
mmap()/write                    64MB/s          45%             35%
mmap()/write+MSG_TRUNC          81MB/s          21%             52%

128MB file: (the machines have 128MB of memory, so this causes some paging,
which made CPU use and performance bounce around)

read/write from file            43MB/s          30%             55%
+MSG_TRUNC                      44MB/s          12%             56%
sendfile()                      62MB/s          45%             33% (+-5%)
+MSG_TRUNC                      81MB/s          21%             47%
mmap()/write                    40MB/s          27% (+-7%)      80%
+MSG_TRUNC                      45MB/s          12%             80%
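
The MSG_TRUNC rows, by the way, are the receiver throwing the data away
inside the kernel instead of copying it out to userspace (that's what
MSG_TRUNC on a TCP recv() does since 2.4), so they show the receive path
minus the user copy. The receive loop is basically this sketch (again not
the tester's actual code):

#include <sys/socket.h>

/* Drain the connection until the sender closes it; with MSG_TRUNC the
 * payload is discarded in-kernel and never copied into buf. */
static void drain(int sock, int discard)
{
        char buf[64 * 1024];

        while (recv(sock, buf, sizeof(buf), discard ? MSG_TRUNC : 0) > 0)
                ;
}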

2.4.2-pre1

writing from buffer             70MB/s          54%             33%
same with MSG_TRUNC on receiver 98MB/s          19%             46%

64MB file:

read/write from file            50MB/s          33%             50%
+MSG_TRUNC                      51MB/s          16%             53%
sendfile()                      68MB/s          48%             41%
+MSG_TRUNC                      93MB/s          36%             56%
mmap()/write                    57MB/s          39%             32%
+MSG_TRUNC                      87MB/s          28%             53%
        
128MB file:

read/write from file            44MB/s          31%             55%
sendfile()                      64MB/s          47%             60%
sendfile+MSG_TRUNC              64MB/s          23%             60%
mmap()/write                    33MB/s          26%             70%

And for comparison:

STP                             98MB/s          2.8%            17%

It might be that the problems are in fact caused by the non-zerocopy-related
"optimizations" in the acenic driver; I'll try playing with it a bit more
tomorrow.

-- 
Pekka Pietikainen
