
Re: optimizing raid performance with xfs

To: Andy Arvai <arvai@xxxxxxxxxxx>
Subject: Re: optimizing raid performance with xfs
From: Steve Lord <lord@xxxxxxx>
Date: 19 Mar 2003 14:39:26 -0600
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <Pine.LNX.4.44.0303191521360.2065-100000@xxxxxxxxxxxxxxxxxx>
References: <Pine.LNX.4.44.0303191521360.2065-100000@xxxxxxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
On Wed, 2003-03-19 at 14:30, Joshua Baker-LePain wrote:
> On Wed, 19 Mar 2003 at 11:48am, Andy Arvai wrote
> 
> > In the next few weeks I will be building a linux server with a large
> > (1.2TB) raid array. There will be two 3ware 7850 cards (running
> > hardware raid5) and a software raid0 across these two cards.  I plan to
> > benchmark three different filesystems (ext3, reiser and xfs) to
> > determine which performs the best. The main thing I am interested in is
> > sequential i/o with large files and I've heard that xfs should be the
> > best choice for this. I am wondering if anyone has recommendations for
> > mkfs.xfs or mount options to maximize performance in this situation.
> 
> During recent testing on a single 3ware card, I found XFS to have twice 
> the write speed of ext3, with a similar read speed (this is all with 
> bonnie++ and the default mkfs options (except for log size)).  I didn't 
> test Reiser.  If the server has lots of memory, make sure that your kernel 
> supports high-memory (HIGHMEM) I/O and that you're using 3ware drivers that 
> support it -- it makes a *big* difference.  I'm using the RH-based XFS 1.2 
> release kernel and the 7.5.3 3ware driver set.
> 
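A quick way to check that (the option names below assume a 2.4 series
kernel, and the config file location assumes a Red Hat style install):

        grep -E 'CONFIG_HIGHMEM|CONFIG_HIGHIO' /boot/config-$(uname -r)

Both want to come back =y on a large memory box, and the 3ware driver
has to be recent enough to DMA to high memory.
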
> I've got another 3ware based system that's similar to yours (two cards 
> with a software stripe), and I found that increasing the chunk-size of the 
> stripe increased performance (at the cost of CPU load) -- I'm using 4096k 
> in production.  mkfs.xfs will automatically tune swidth and sunit for the 
> software stripe.
> 
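As a rough sketch of that two-card stripe (mdadm syntax; the device
names and the 4 Mbyte chunk are examples only, adjust for your arrays):

        mdadm --create /dev/md0 --level=0 --raid-devices=2 \
              --chunk=4096 /dev/sda /dev/sdb
        mkfs -t xfs /dev/md0

mdadm takes the chunk size in Kbytes, so 4096 here is the 4096k chunk
mentioned above, and mkfs.xfs picks sunit and swidth up from the md
device automatically.
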
> On the hardware side, make sure your cards are on separate PCI busses.  
> You'll be bus limited otherwise.

One thing which came up recently was how inodes get placed once you
cross the 1 Tbyte boundary: the allocation policy changes to avoid
33-bit inode numbers. You can avoid this policy change with

        mkfs -t xfs -f -i size=512 /dev/xxx
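
The inode size matters because inode numbers encode disk locations.
Assuming the default 4k filesystem blocks:

        256 byte inodes: 16 per block, 4 offset bits,
                         2^28 blocks * 4k = 1 Tbyte within 32 bits
        512 byte inodes:  8 per block, 3 offset bits,
                         2^29 blocks * 4k = 2 Tbyte within 32 bits

so doubling the inode size pushes the point where a 33rd bit would be
needed out to 2 Tbytes.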

Also, pay attention to the sunit and swidth lines in the mkfs output;
they control how data will be laid out. You want to make sure they
line up with your device configuration, which may or may not happen
automatically depending on your setup. Read the mkfs.xfs man page for
how to control them yourself.
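
As an illustration only (the numbers depend entirely on your arrays):
if one of the hardware units were 8 drives in RAID5 with a 64k stripe,
that is 7 data drives per stripe, and the alignment could be given
explicitly in 512-byte units:

        mkfs -t xfs -d sunit=128,swidth=896 /dev/xxx

64k is 128 sectors, and 7 data drives times 128 sectors is 896. Getting
these wrong will not corrupt anything, it just costs performance.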

Steve

-- 

Steve Lord                                      voice: +1-651-683-3511
Principal Engineer, Filesystem Software         email: lord@xxxxxxx

