
To: linux-xfs@xxxxxxxxxxx
Subject: Re: mkfs options for a 16x hw raid5 and xfs (mostly large files)
From: Ralf Gross <Ralf-Lists@xxxxxxxxxxxx>
Date: Tue, 25 Sep 2007 15:49:56 +0200
In-reply-to: <20070925125733.GA20873@xxxxxxxxxxxx>
References: <20070923093841.GH19983@xxxxxxxxxxxxxxxxxxxxxxxxx> <20070924173155.GI19983@xxxxxxxxxxxxxxxxxxxxxxxxx> <Pine.LNX.4.64.0709241400370.12025@xxxxxxxxxxxxxxxx> <20070924203958.GA4082@xxxxxxxxxxxxxxxxxxxxxxxxx> <Pine.LNX.4.64.0709241642110.19847@xxxxxxxxxxxxxxxx> <20070924213358.GB4082@xxxxxxxxxxxxxxxxxxxxxxxxx> <Pine.LNX.4.64.0709241736370.19847@xxxxxxxxxxxxxxxx> <20070924215223.GC4082@xxxxxxxxxxxxxxxxxxxxxxxxx> <20070925123501.GA20499@xxxxxxxxxxxxxxxxxxxxxxxxx> <20070925125733.GA20873@xxxxxxxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.5.9i
KELEMEN Peter wrote:
> * Ralf Gross (ralf-lists@xxxxxxxxxxxx) [20070925 14:35]:
> 
> > There is a second RAID device attached to the server (24x
> > RAID5). The numbers I get from this device are a bit worse than
> > the 16x RAID 5 numbers (150MB/s read with dd).
> 
> You are expecting 24 spindles to line up on every write request,
> which has to be 23*chunksize bytes in order to avoid a
> read-modify-write (RMW) cycle.  Additionally, your array is so big
> that you're very likely to hit another (unrecoverable) error while
> rebuilding.  Chop up your monster RAID5 array into smaller arrays
> and stripe across them.  Even better, consider RAID10.

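To make Peter's point concrete: assuming a 64 KiB chunk size on the
24-disk array, a full stripe is 23 * 64 KiB = 1472 KiB, and only writes
in multiples of that avoid the read-modify-write penalty. The chunk
size, mount point and file name below are only my assumptions for a
quick dd comparison:

  # full-stripe-sized, direct I/O writes (should avoid RMW)
  dd if=/dev/zero of=/mnt/raid24/test bs=1472k count=700 oflag=direct

  # misaligned block size for comparison (not a stripe multiple)
  dd if=/dev/zero of=/mnt/raid24/test bs=1024k count=1024 oflag=direct
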
RAID10 is not an option; we need 60+ TB at the moment, mostly for large
video files. Basically, the read/write performance we get with the 16x
RAID 5 is sufficient for our needs. The 24x RAID 5 is only a test
device. The volumes that will be used in the future are the 16/15x
RAIDs (a 48-disk shelf with 3 volumes).

I'm just wondering how people get 400+ MB/s with HW-RAID 5.
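
For what it's worth, I'd align the filesystem to the hardware stripe at
mkfs time, something along these lines for the 16x RAID 5 (15 data
disks; the 64 KiB chunk size is just an example and /dev/sdX stands in
for the actual device):

  mkfs.xfs -d su=64k,sw=15 /dev/sdX

su should match the controller's chunk size and sw the number of data
disks, so XFS can line up allocations on full stripes.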

Ralf

