
Re: poor io performance with xfs+raid5

To: Mike Eldridge <diz@xxxxxxxxx>
Subject: Re: poor io performance with xfs+raid5
From: Seth Mos <knuffie@xxxxxxxxx>
Date: Fri, 26 Apr 2002 09:04:18 +0200
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <20020425154135.B14120@xxxxxxxxxxxxxxxx>
References: <1019765945.12905.102.camel@xxxxxxxxxxxxxxxxxxxx> <Pine.BSO.4.44.0204251216360.25324-100000@xxxxxxxxxxxxxxxxxxxxxxxxxx> <3CC85999.501431E5@xxxxxxxxxxxxxxxx> <20020425144025.N16048@xxxxxxxxxxxxxxxx> <1019765945.12905.102.camel@xxxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
At 15:41 25-4-2002 -0500, Mike Eldridge wrote:
On Thu, Apr 25, 2002 at 03:19:05PM -0500, Steve Lord wrote:
> Well, Duh! I should have seen that first time around, I get into the
> habit of reading my email too fast!
>
> We may be able to fix some things, if we can remake the filesystem.
> First you need to know the stripe unit of your raid - we can feed
> this into XFS to make it do stripe aligned allocations. This has
> to be done by hand on linux. Take a look at the mkfs.xfs man page
> and the section on sunit and swidth options. Probably bump your log size
> up from the default somewhat, not sure how it ended up as 1839
> that is scary.

RAID5 on this card offers only a 64K stripe size.  however, i will be
recreating the array as RAID1 or RAID10, which offers stripe sizes from
64K to 1MB.  i'm not sure which is the best way to go.  i think that the
best thing to do, considering additional space requirements might be
necessary, is to go with multiple RAID1 arrays and let LVM do the
striping.  any caveats here?

That should work just fine.
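A sketch of what such a setup could look like (the device names /dev/md0 and /dev/md1, the volume group name, and the sizes are all placeholders; -i and -I tell LVM to stripe across the two RAID1 arrays with a 64 KB stripe size):

# each /dev/mdX is one RAID1 pair
pvcreate /dev/md0 /dev/md1
vgcreate mailvg /dev/md0 /dev/md1
# stripe the logical volume across both PVs, 64 KB stripes
lvcreate -i 2 -I 64 -L 100G -n maillv mailvg
mkfs.xfs /dev/mailvg/maillv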

this particular box is a mail server and it handles a lot of i/o with
pretty small files (< 64K).  i want to optimize for performance.
unfortunately, this is also my *first* foray into xfs/lvm/raid, so i
want to make sure i have as much information as possible before i carve
it all in stone.

Use RAID 1, since mail servers are a lot like databases: lots of small reads and writes. Writes are always slow on RAID5, so don't use it for a database or mail server.

Make the filesystem with a 32MB log. Note that the "b" suffix means filesystem blocks (4KB by default), so 32MB is 8192 blocks:
mkfs.xfs -l size=8192b /dev/sda

Mount the filesystem with more log buffers (this also means that you can lose more in-transit data on a crash). Mail servers will probably write files synchronously, though, in which case the extra buffers may not help much; there is a chance that this is also the problem you are encountering with the RAID5 config.
mount -t xfs -o logbufs=8 /dev/sda /var/spool/mail
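If you do end up keeping a striped array, the sunit/swidth options Steve mentioned have to be worked out by hand. A quick sketch of the arithmetic (assuming a 64 KB stripe unit on a 4-disk RAID5, i.e. three data disks; /dev/md0 is a placeholder):

```shell
# mkfs.xfs takes sunit/swidth in 512-byte sectors
sunit=$((64 * 1024 / 512))   # 64 KB stripe unit -> 128 sectors
ndata=3                      # a 4-disk RAID5 has 3 data disks
swidth=$((sunit * ndata))    # full stripe width -> 384 sectors
echo "mkfs.xfs -d sunit=$sunit,swidth=$swidth /dev/md0"
```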

Cheers

--
Seth
It might just be your lucky day, if you only knew.

