To: linux-xfs@xxxxxxxxxxx
Subject: NFS 0 sizes bug, high nfsd load
From: Robin Humble <rjh@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Date: Mon, 26 Feb 2001 10:03:23 +1100 (EDT)
Reply-to: rjh_at_pixel.maths.monash.edu.au@xxxxxxxxxxxxxxxxxxxxxxxxxxx
Sender: owner-linux-xfs@xxxxxxxxxxx

Hi,

I did some testing with XFS as we're hoping to build an ATA software
RAID0 storage box which'll be accessed over NFSv3. Has anyone had
success with XFS over software RAID0?
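
For reference, the sort of setup I have in mind is roughly the following,
using the old raidtools style. The device names, chunk size, mount point
and export options are all just placeholders, not a tested recipe:

  # /etc/raidtab - two-disk stripe set (placeholder devices)
  raiddev /dev/md0
      raid-level            0
      nr-raid-disks         2
      persistent-superblock 1
      chunk-size            64
      device                /dev/hda1
      raid-disk             0
      device                /dev/hdc1
      raid-disk             1

  mkraid /dev/md0                        # assemble the stripe set
  mkfs -t xfs /dev/md0                   # put XFS on it
  mount -t xfs /dev/md0 /export/scratch
  echo '/export/scratch client(rw,no_root_squash)' >> /etc/exports
  exportfs -a                            # (re)export everything in /etc/exports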

Anyway, I encountered a few problems which I thought y'all might like
to know about.

I compiled up 2.4.2+XFS from Friday's CVS and exported an XFS
partition from there using NFSv3 to a box running PreRelease0.9 ...
On the client, 'du' and 'ls -s' both reported 0 sizes for everything.
'ls -l' worked ok, as did 'df'.
Exporting from a pre0.9 box to a pre0.9 box works just fine, so this
is a new bug.
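
The split makes sense when you look at which stat fields each tool uses:
'du' and 'ls -s' report the allocated block count (st_blocks) from the
attributes the server hands back, while 'ls -l' uses st_size and 'df' uses
the filesystem statistics. So it looks like the block counts are coming
back as zero over the wire. A quick way to see it on the client (the path
is just an example):

  $ ls -l /mnt/xfs/somefile   # st_size - looks correct
  $ ls -s /mnt/xfs/somefile   # st_blocks - shows 0
  $ du -k /mnt/xfs/somefile   # du sums st_blocks, so it also shows 0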

From a pre0.9 box to a pre0.9 box I also did
  dd if=/dev/zero of=bigFile bs=1M count=1000
onto an NFSv3-mounted XFS partition. After the memory cache filled up
on the NFS server, the kernel nfsds started using a LOT of CPU
system time - around the 30-50% mark. The network link between the
client and server was slow (~2Mbit/s), so I didn't expect this.
The same write over NFSv3 to an ext2 partition used only about 2-5%
CPU. I was using the default 8 nfsds.
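
For anyone who wants to reproduce it, the test boils down to something
like this (the mount options shown are only illustrative):

  # on the client
  mount -t nfs -o nfsvers=3,rsize=8192,wsize=8192 server:/export/xfs /mnt/xfs
  dd if=/dev/zero of=/mnt/xfs/bigFile bs=1M count=1000

  # on the server, once its page cache has filled
  vmstat 5    # watch the "sy" (system time) column climb
  top         # the nfsd threads sit near the top of the list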

I also ran bonnie++ (www.coker.com.au/bonnie++/) on that 2.4.2+XFS CVS
box's XFS partition (without any NFS this time), and then ran it again
on the same partition reformatted as ext2 and as ReiserFS. XFS had the
best bandwidth (up to 30% more than the others when kio was used), but
its delete performance was pretty bad - worse even than ext2. Random
delete was kinda amazingly slow.

                ------Sequential Create------ --------Random Create--------
                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
          files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 ext2        16   365  99 +++++ +++  9735  98   366  99 +++++ +++  3119  99
 xfs         16  1473  39 +++++ +++  1360  31  1486  39 +++++ +++   261   7
 reiser      16  8868 100 +++++ +++ 11914  99  8957 100 +++++ +++ 10379 100

This is with the default 16k files per directory, and the +++'s mean
the operation was too fast to measure (a good thing! :-)
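
For the record, a run along these lines can be reproduced with something
like the command below; the directory, file size and user are just
examples:

  bonnie++ -d /mnt/xfs -s 1024 -n 16 -u nobody
  # -d  directory to run the tests in
  # -s  size (in MB) of the file used for the bandwidth tests
  # -n  number of files for the create/delete tests, in multiples of 1024
  # -u  user to run as when started as root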

Hope this helps... let me know if I can give you more info. The full
bonnie++ output is at http://www.cita.utoronto.ca/~rjh/fs.bench.txt, but
as always YMMV. It's a datapoint anyway.

cheers,
robin
--
    Robin Humble       http://www.cita.utoronto.ca/~rjh/
