
Re: Proper SU/SW values for nested raids RAID50/RAID60

To: "Mark Noon" <mnoon@xxxxxxx>
Subject: Re: Proper SU/SW values for nested raids RAID50/RAID60
From: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Date: Thu, 9 Jul 2015 15:29:24 +0200
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <trinity-35b666d3-ff9b-4764-9962-bf376c678994-1436381694787@3capp-mailcom-bs09>
Organization: Intellique
References: <trinity-35b666d3-ff9b-4764-9962-bf376c678994-1436381694787@3capp-mailcom-bs09>
On Wed, 8 Jul 2015 20:54:54 +0200,
"Mark Noon" <mnoon@xxxxxxx> wrote:

> I'm trying to understand the proper values when formatting an XFS
> filesystem on top of a hardware RAID controller, and I seem to get
> conflicting information between what's on the wiki and various mailing
> lists and forum posts when searching Google. I have a system that
> will be storing images and other static content files across a
> RAID50. The plan is to build 3 RAID5 arrays, each with 5 disks and a
> stripe size of 128K, striped across a RAID0. I will then build a
> logical volume on top of this. When running mkfs.xfs, should the
> correct settings be su=128k sw=12 (-1 parity 'disk' for each of the 3
> arrays) or su=128k sw=4 (only considering one of the arrays)? Also, if
> I ever add a JBOD and extend the storage, how does this change things,
> especially given that it will be on a completely different RAID
> volume just merged together via LVM? Thanks for the assistance!

You should probably go with su=128k and sw=12. It probably won't
make much difference either way.
When you later extend your FS with LVM, your attempt at optimisation
will be completely nullified anyway :)
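
For reference, a minimal sketch of what that would look like at mkfs
time (the device path /dev/vg0/lv_images and mount point /mnt/images
are just placeholders for your own LVM volume and mount point):

    # 3 x RAID5 of 5 disks striped in RAID0: 15 disks total,
    # 3 parity disks, so 12 data-bearing spindles.
    # su = stripe unit of one member RAID5 (128K),
    # sw = number of data-bearing disks (12).
    mkfs.xfs -d su=128k,sw=12 /dev/vg0/lv_images

    # After mounting, the geometry actually recorded in the superblock
    # can be checked (sunit/swidth are reported in filesystem blocks):
    xfs_info /mnt/images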

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |   <eflorac@xxxxxxxxxxxxxx>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------
