
Re: xfs vs. jfs results: why?



On Sun, 2003-01-19 at 21:20, LA Walsh wrote:
> CONCLUSION:
> We are not able to reproduce the excellent numbers described at:
> http://home.fnal.gov/~yocum/storageServerTechnicalNote.html
> It appears that the best performance for read-oriented and mixed
> workloads is obtained with JFS, and for write-oriented workloads with XFS.
> ---
> >From page: http://pcbunn.cacr.caltech.edu/gae/3ware_raid_tests.htm
> 
> Basic thrust was JFS anywhere from 12-28% faster on reads, XFS
> up to 33% faster on writes.  
> 
> Writes are good, but they aren't what I do the most of (though wasn't
> XFS designed with tuning for real-time dmedia recording as a
> priority?).  XFS seemed to _very_ slightly outperform Reiser on reads,
> though Reiser's worst times were sometimes better than our worst.
> Reiser performed about 20% better than XFS for writes when 3ware
> native RAID5 was enabled.
> 
> So XFS is not clearly an across-the-board champ in reads or writes,
> though overall, as the article claims, XFS comes in 2nd.
> 
> So...wazzup?  Isn't JFS newer?  Hasn't XFS been around longer and had
> the benefit of years of tuning?  Is it just the Linux integration that
> has slowed things down?  Maybe I've just read too many of our own 
> marketing docs, but I thought XFS was close to stellar among FS's...?
> 
> Is this testing bogus?  A fluke?  Or should I start getting a grip
> with reality and becoming disillusioned (illusions of marketing tripe
> instilled in head being laid to rest...?)
> 
> BTW -- hope this goes through, ok...my email has been weird since
> around midnight this morning.  Getting some smatterings of outside
> email, but no list email from any of my list subscriptions.  Very
> weird since they are from disparate list servers.  Other connectivity
> seems unchanged...
> 
> If you respond to this, please 'cc' me too so I can hopefully get a
> copy...I'm not sure what would be blocking listmails...
> 
> tnx,
> -l
> 

One thing the paper did not mention at all was the build-configuration
parameters or the mkfs/mount options that were used.

The correct mkfs options on XFS can make extents line up on
stripe boundaries, which helps I/O.
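For example, for a hypothetical 8-disk RAID5 array with a 64 KiB
per-disk chunk (device name and geometry here are assumptions; adjust
to the actual array), the stripe unit and stripe width work out as:

```shell
# Derive mkfs.xfs stripe alignment from the array geometry.
CHUNK_KB=64                  # per-disk chunk size (assumed)
NDISKS=8                     # total disks in the array (assumed)
DATA_DISKS=$((NDISKS - 1))   # RAID5: one disk's worth of space goes to parity
# su = per-disk chunk, sw = number of data disks; su * sw = one full stripe,
# so allocations can start and end on stripe boundaries.
echo "mkfs.xfs -d su=${CHUNK_KB}k,sw=${DATA_DISKS} /dev/sda1"
```

The echoed command is just a preview; run the real mkfs.xfs only on a
device you intend to reformat.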

It also does not state the size of the test files in relation
to memory size, which makes the difference between reading/writing
from cache and actually hitting the disks.
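A quick way to rule the page cache out is to size the test file off
physical memory before benchmarking. A minimal sketch (the file path
is illustrative):

```shell
# Read physical RAM from /proc/meminfo and size the test file to
# twice RAM, so reads cannot be satisfied entirely from the page cache.
MEM_KB=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
FILE_MB=$(( MEM_KB * 2 / 1024 ))
echo "dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=${FILE_MB}"
```

Again, the dd command is only echoed here; running it needs a
filesystem with enough free space for a file twice the size of RAM.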

Since they talk about being almost CPU-bound, this would seem to
indicate that anything in the code which can be turned off should
be turned off to improve performance. In the case of XFS there are
three things which will influence the performance: POSIX ACLs,
quotas, and DMAPI. Turning each of these off shortens the code
path. I am also not sure which code base was used for XFS; they
say Red Hat 2.4.19, which does not include XFS.
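As a sketch of what that build configuration might look like (the
option names below are assumptions based on the 2.4-era XFS patch
set; check the actual kernel tree being built):

```
# Kernel .config fragment: compile out the optional XFS features.
# CONFIG_XFS_QUOTA is not set
# CONFIG_XFS_POSIX_ACL is not set
# CONFIG_XFS_DMAPI is not set
```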

So all in all, my conclusion is that there is too little information
here to deduce what might be possible with the hardware.

Steve