Re: xfs hardware RAID alignment over linear lvm

To: Stewart Webb <stew@xxxxxxxxxxxxxxxxxx>
Subject: Re: xfs hardware RAID alignment over linear lvm
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Thu, 26 Sep 2013 04:22:30 -0500
Cc: Dave Chinner <david@xxxxxxxxxxxxx>, Chris Murphy <lists@xxxxxxxxxxxxxxxxx>, "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <CAE3v2EYVnXiWq1n8AJ0+Y2eifZyhV08S4uLwf6B6mXXWAzBzRA@xxxxxxxxxxxxxx>
References: <CAE3v2EaODFud_S_BzuSjtwGwuNBXhvL0RiPB1P5QroF45Obwbw@xxxxxxxxxxxxxx> <52435327.9080607@xxxxxxxxxxxxxxxxx> <2F959FD9-EF28-4495-9D0B-59B93D89C820@xxxxxxxxxxxxxxxxx> <20130925215713.GH26872@dastard> <CAE3v2EYVnXiWq1n8AJ0+Y2eifZyhV08S4uLwf6B6mXXWAzBzRA@xxxxxxxxxxxxxx>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20130801 Thunderbird/17.0.8
On 9/26/2013 3:55 AM, Stewart Webb wrote:
> Thanks for all this info Stan and Dave,
>> "Stripe size" is a synonym of XFS sw, which is su * #disks.  This is the
>> amount of data written across the full RAID stripe (excluding parity).
> The reason I stated Stripe size is because in this instance, I have 3ware
> RAID controllers, which refer to
> this value as "Stripe" in their tw_cli software (god bless manufacturers
> renaming everything)
> I do, however, have a follow-on question:
> On other systems, I have similar hardware:
> 3x Raid Controllers
> 1 of them has 10 disks as RAID 6 that I would like to add to a logical
> volume
> 2 of them have 12 disks as a RAID 6 that I would like to add to the same
> logical volume
> All have the same "Stripe" or "Strip Size" of 512 KB
> So if I were going to make 3 separate xfs volumes, I would do the
> following:
> mkfs.xfs -d su=512k,sw=8 /dev/sda
> mkfs.xfs -d su=512k,sw=10 /dev/sdb
> mkfs.xfs -d su=512k,sw=10 /dev/sdc
> I assume, if I were going to bring them all into 1 logical volume, it
> would be best placed to have the sw value set
> to a value that is divisible by both 8 and 10 - in this case 2?

No.  In this case you do NOT stripe align XFS to the storage, because
it's impossible--the RAID stripes are dissimilar.  Instead you use the
default 4KB writeout, as if the target were a single disk drive.
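A quick sanity check of the arithmetic, assuming the geometries quoted
above (RAID6 loses two disks to parity, so sw is the disk count minus 2):

```shell
# sw = number of data disks in the array = total disks - 2 (RAID6 parity)
sw_for_raid6() {
  echo $(( $1 - 2 ))
}
sw_for_raid6 10   # -> 8  (the 10-disk array)
sw_for_raid6 12   # -> 10 (each 12-disk array)
```

The two widths, 8 and 10, cannot both be satisfied by the single su/sw
pair that mkfs.xfs accepts, which is why the concatenated device falls
back to the single-disk 4KB default.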

As Dave stated, if you format a concatenated device with XFS and you
desire to align XFS, then all constituent arrays must have the same
geometry.
Three things to be aware of here:

1.  With a decent hardware write caching RAID controller, having XFS
aligned to the RAID geometry is a small optimization WRT overall write
performance, because the controller is going to be doing the optimizing
of final writeback to the drives.

2. Alignment does not affect read performance.

3.  XFS only performs aligned writes during allocation.  I.e. this only
occurs when creating a new file, new inode, etc.  For append and
modify-in-place operations, there is no write alignment.  So again,
stripe alignment to the hardware geometry is merely an optimization, and
only affects some types of writes.
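If you want to verify what alignment an existing filesystem was actually
created with, xfs_info reports it (the mount point here is hypothetical);
its sunit/swidth fields are given in filesystem blocks:

```shell
# Show the geometry a filesystem was formatted with; sunit/swidth are
# reported in filesystem blocks (sunit=0, swidth=0 means unaligned,
# i.e. the single-disk default):
xfs_info /mnt/data
```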

What really makes a difference as to whether alignment will be of
benefit to you, and how often, is your workload.  So at this point, you
need to describe the primary workload(s) of the systems we're discussing.

