
Re: xfs vs. jfs results: why?

To: Seth Mos <knuffie@xxxxxxxxx>, LA Walsh <law@xxxxxxxxx>, linux-xfs@xxxxxxxxxxx
Subject: Re: xfs vs. jfs results: why?
From: Andi Kleen <ak@xxxxxxx>
Date: Mon, 20 Jan 2003 11:07:17 +0100
In-reply-to: <4.3.2.7.2.20030120103529.042f3d50@pop.xs4all.nl>
References: <000001c2c032$e4edc240$1403a8c0@sc.tlinx.org> <4.3.2.7.2.20030120103529.042f3d50@pop.xs4all.nl>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4i
On Mon, Jan 20, 2003 at 10:35:36AM +0100, Seth Mos wrote:
> At 19:20 19-1-2003 -0800, LA Walsh wrote:
> >CONCLUSION:
> >We are not able to reproduce the excellent numbers described at:
> >http://home.fnal.gov/~yocum/storageServerTechnicalNote.html
> >It appears that the best performance for read-orientated and mixed
> >workloads is obtained with JFS, for write-orientated XFS.
> >---
> >From page: http://pcbunn.cacr.caltech.edu/gae/3ware_raid_tests.htm
> 
> Since you are using 2.4.19. Did you compile this kernel with HIGMEM and 
> HIGHMEM IO?
> 
> There have been some performance problems in 2.4.19 if you did not have 
> HIGHMEM IO turned on.
> 
> Can you check this please?

If it was a highmem IO bouncing problem, then all tested file systems
would have been affected equally, because they all used the same drivers.
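For what it's worth, a quick way to check whether a 2.4 build has these options is to grep the kernel's .config. CONFIG_HIGHIO is the 2.4 name for the highmem block IO option; the .config path below is just an assumption, substitute wherever your build tree lives:

```shell
# Sketch: report whether HIGHMEM/HIGHMEM IO options were enabled in a
# 2.4 kernel .config. Path and option list are assumptions; adjust as needed.
check_highmem() {
    config="$1"
    for opt in CONFIG_HIGHMEM4G CONFIG_HIGHMEM64G CONFIG_HIGHIO; do
        if grep -q "^${opt}=y" "$config" 2>/dev/null; then
            echo "$opt=y"
        else
            echo "$opt unset"
        fi
    done
}

check_highmem "${1:-/usr/src/linux/.config}"
```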

Wild theory:

Older 2.4 XFS (1.1) used a different read path that was oriented towards
efficient use of extents and big IO requests. In 1.2 this was rewritten to use
the generic generic_file_read function in Linux, which allocates many more
buffer_heads and is likely somewhat more CPU intensive. They may be hit by that.

Linux 2.5 has a new block device layer which should support big
IO requests and extent-based IO, as XFS issues them, much better.

-Andi

