| To: | xfs@xxxxxxxxxxx |
|---|---|
| Subject: | Proper SU/SW values for nested raids RAID50/RAID60 |
| From: | "Mark Noon" <mnoon@xxxxxxx> |
| Date: | Wed, 8 Jul 2015 20:54:54 +0200 |
I'm trying to understand the proper values to use when formatting an XFS filesystem on top of a hardware RAID controller, and I seem to get conflicting information between what's on the wiki and various mailing list threads and forum posts when searching Google. I have a system that will be storing images and other static content files across a RAID50. The plan is to build 3 RAID5 arrays, each with 5 disks and a stripe size of 128K, striped across a RAID0. I will then build a logical volume on top of this.
When running mkfs.xfs, should the correct settings be su=128k sw=12 (15 disks minus one parity 'disk' for each of the 3 arrays), or su=128k sw=4 (considering only one of the arrays)?
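For reference, the two candidate invocations can be sketched as below. The device path is hypothetical, and the mkfs.xfs commands are only echoed, not executed:

```shell
# Assumed geometry: 3 RAID5 arrays of 5 disks each, 128 KiB chunk,
# the three arrays striped together as a RAID0 (i.e. RAID50).
ARRAYS=3
DISKS_PER_ARRAY=5
DATA_DISKS_PER_ARRAY=$((DISKS_PER_ARRAY - 1))   # one parity disk per RAID5 leg
TOTAL_DATA_DISKS=$((ARRAYS * DATA_DISKS_PER_ARRAY))

echo "data disks per leg: $DATA_DISKS_PER_ARRAY, total: $TOTAL_DATA_DISKS"

# Option A: stripe width spans the whole RAID50 (all data disks):
echo mkfs.xfs -d su=128k,sw=$TOTAL_DATA_DISKS /dev/vg0/lv0
# Option B: stripe width of a single RAID5 leg only:
echo mkfs.xfs -d su=128k,sw=$DATA_DISKS_PER_ARRAY /dev/vg0/lv0
```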
Also, if I ever add a JBOD and extend the storage, how does this change things, especially given that it will be on a completely different RAID volume merely merged together via LVM?

Thanks for the assistance!