[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: xfs vs. jfs results: why?



On Mon, Jan 20, 2003 at 10:35:36AM +0100, Seth Mos wrote:
> At 19:20 19-1-2003 -0800, LA Walsh wrote:
> >CONCLUSION:
> >We are not able to reproduce the excellent numbers described at:
> >http://home.fnal.gov/~yocum/storageServerTechnicalNote.html
> >It appears that the best performance for read-orientated and mixed
> >workloads is obtained with JFS, for write-orientated XFS.
> >---
> >From page: http://pcbunn.cacr.caltech.edu/gae/3ware_raid_tests.htm
> 
> Since you are using 2.4.19: did you compile this kernel with HIGHMEM and 
> HIGHMEM IO?
> 
> There have been some performance problems in 2.4.19 if you did not have 
> HIGHMEM IO turned on.
> 
> Can you check this please?

If it was a highmem bouncing problem then it would have affected all tested
file systems equally, because they all used the same drivers.
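For reference, a quick way to check the options Seth mentioned is to grep the
build config (a sketch; the /usr/src/linux path is an assumption, adjust it to
wherever the 2.4 tree actually lives):

```shell
# Hypothetical location of the kernel source tree; override CONFIG as needed.
CONFIG=${CONFIG:-/usr/src/linux/.config}
# On 2.4 the relevant options are CONFIG_HIGHMEM4G/CONFIG_HIGHMEM64G
# ("High Memory Support") and CONFIG_HIGHIO ("HIGHMEM I/O support").
grep -E 'CONFIG_HIGHMEM|CONFIG_HIGHIO' "$CONFIG" 2>/dev/null \
    || echo "no config found at $CONFIG"
```

If CONFIG_HIGHIO is unset on a highmem box, every read/write to a highmem
page gets bounced through low memory, which hurts all file systems alike.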

Wild theory:

Older 2.4 XFS (1.1) used a different read path that was more oriented towards 
efficient use of extents and big IO. In 1.2 this was rewritten to use the 
generic_file_read function in Linux, which allocates many more buffer_heads 
and is likely a bit more CPU intensive. They may be hit by that.
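To put rough numbers on that theory (illustrative arithmetic only, not kernel
code; the 256K extent-sized request is an assumed figure):

```shell
# One buffer_head per filesystem block in the generic_file_read path,
# versus one big request per extent-sized chunk in an extent-aware path.
READ_SIZE=$((1 << 20))        # a 1 MB read
BLOCK_SIZE=4096               # typical 4 KB filesystem block
EXTENT_SIZE=$((256 << 10))    # assumed 256 KB extent-sized request

echo "buffer_heads:    $(( READ_SIZE / BLOCK_SIZE ))"   # 256 allocations
echo "extent requests: $(( READ_SIZE / EXTENT_SIZE ))"  # 4 requests
```

Two orders of magnitude more metadata objects per read is the kind of thing
that would show up as extra CPU time without changing the disk traffic much.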

Linux 2.5 has a new block device layer which should support big 
IO requests and extents, as XFS issues them, much better.

-Andi