
Re: xfs hardware RAID alignment over linear lvm

To: Stewart Webb <stew@xxxxxxxxxxxxxxxxxx>
Subject: Re: xfs hardware RAID alignment over linear lvm
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Wed, 25 Sep 2013 16:18:31 -0500
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <CAE3v2EaODFud_S_BzuSjtwGwuNBXhvL0RiPB1P5QroF45Obwbw@xxxxxxxxxxxxxx>
References: <CAE3v2EaODFud_S_BzuSjtwGwuNBXhvL0RiPB1P5QroF45Obwbw@xxxxxxxxxxxxxx>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20130801 Thunderbird/17.0.8
On 9/25/2013 7:56 AM, Stewart Webb wrote:
> Hi All,

Hi Stewart,

> I am trying to do the following:
> 3 x Hardware RAID Cards each with a raid 6 volume of 12 disks presented to
> the OS
> all raid units have a "stripe size" of 512 KB

Just for future reference, so you're using the correct terminology: a
value of 512KB is surely your XFS su value, also called a "strip" in
LSI terminology, or a "chunk" in Linux software md/RAID terminology.
This is the amount of data written to each data spindle (excluding
parity) in the array.

"Stripe size" is a synonym of XFS sw, which is su * the number of data
disks.  This is the amount of data written across the full RAID stripe
(excluding parity).
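
To put numbers on your setup (assuming the 512KB your controllers
report really is the per-drive strip):

  su (strip/chunk)  = 512KB
  data disks        = 12 - 2 parity = 10
  stripe width      = 512KB * 10 = 5120KB of data per full stripe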

> so given the info on the xfs.org wiki - I should give each filesystem a
> sunit of 512 KB and a swidth of 10 (because RAID 6 has 2 parity disks)

Partially correct.  If you format each /dev/[device] presented by the
RAID controller with an XFS filesystem, 3 filesystems total, then your
values above are correct.  EXCEPT you must use the su/sw parameters in
mkfs.xfs if using BYTE values: sunit/swidth are specified in 512-byte
sectors, whereas su takes a byte value and sw a multiplier (the number
of data disks).  See mkfs.xfs(8).
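
As a rough sketch (device name hypothetical), that would look like:

  # 512KB strip, 10 data disks per 12-disk RAID6 array
  mkfs.xfs -d su=512k,sw=10 /dev/sdX

which is the same as -d sunit=1024,swidth=10240 expressed in 512-byte
sectors.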

> all well and good
> 
> But - I would like to use Linear LVM to bring all 3 cards into 1 logical
> volume -
> here is where my question crops up:
> Does this affect how I need to align the filesystem?

In the case of a concatenation, which is what LVM linear is, you should
use an XFS alignment identical to that for a single array as above.
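
A minimal sketch, assuming the three arrays appear as /dev/sdb,
/dev/sdc and /dev/sdd (hypothetical names) and you want one LV
spanning all the free space:

  pvcreate /dev/sdb /dev/sdc /dev/sdd
  vgcreate vg_xfs /dev/sdb /dev/sdc /dev/sdd
  lvcreate -l 100%FREE -n lv_xfs vg_xfs   # linear allocation is the default
  mkfs.xfs -d su=512k,sw=10 /dev/vg_xfs/lv_xfs

The su/sw values stay those of a single array, since a linear LV just
appends the arrays end to end rather than striping across them.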

-- 
Stan
