xfs hardware RAID alignment over linear lvm

Stewart Webb stew at messeduphare.co.uk
Thu Sep 26 04:28:00 CDT 2013


Understood,

My workload is primarily reads (80%+ read operations), so the defaults
will most likely be best suited on this occasion.

I was simply trying to follow the guidelines on the XFS wiki to
the best of my ability, and felt I didn't understand the impact of
applying them via LVM.

Now I feel I understand enough to continue with what I need to do.
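
For reference, this is roughly what I plan to run - the device and
volume names here are examples only:

    pvcreate /dev/sda /dev/sdb /dev/sdc
    vgcreate vg_data /dev/sda /dev/sdb /dev/sdc
    # linear (concatenated) LV spanning all three arrays
    lvcreate -l 100%FREE -n lv_data vg_data
    # dissimilar RAID geometries underneath, so take the 4KB default
    mkfs.xfs /dev/vg_data/lv_data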

Thanks again


On 26 September 2013 10:22, Stan Hoeppner <stan at hardwarefreak.com> wrote:

> On 9/26/2013 3:55 AM, Stewart Webb wrote:
> > Thanks for all this info Stan and Dave,
> >
> >> "Stripe size" is a synonym of XFS sw, which is su * #disks.  This is the
> >> amount of data written across the full RAID stripe (excluding parity).
> >
> > The reason I said "Stripe size" is that in this instance I have 3ware
> > RAID controllers, which refer to this value as "Stripe" in their tw_cli
> > software (god bless manufacturers renaming everything).
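> >
> > For reference, a quick way to check that value on a 3ware controller
> > (the /c0/u0 controller/unit path is just an example and will vary per
> > system):
> >
> >     tw_cli /c0/u0 show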
> >
> > I do, however, have a follow-on question:
> > On other systems, I have similar hardware:
> > 3x RAID controllers:
> > 1 of them has 10 disks as RAID 6 that I would like to add to a logical
> > volume
> > 2 of them have 12 disks as a RAID 6 that I would like to add to the same
> > logical volume
> >
> > All have the same "Stripe" or "Strip Size" of 512 KB
> >
> > So if I were going to make 3 separate XFS filesystems, I would do the
> > following (for RAID 6, sw = number of data disks: 10 - 2 = 8 and
> > 12 - 2 = 10):
> > mkfs.xfs -d su=512k,sw=8 /dev/sda
> > mkfs.xfs -d su=512k,sw=10 /dev/sdb
> > mkfs.xfs -d su=512k,sw=10 /dev/sdc
> >
> > I assume, if I were going to bring them all into 1 logical volume, it
> > would be best to set the sw value to something that divides evenly
> > into both 8 and 10 - in this case 2?
>
> No.  In this case you do NOT stripe align XFS to the storage, because
> it's impossible--the RAID stripes are dissimilar.  In this case you use
> the default 4KB write out, as if it were a single disk drive.
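>
> A minimal sketch of that (the VG/LV names are just examples):
>
>     mkfs.xfs /dev/vg0/lv0
>
> With no su/sw given, and none detected from a linear LV, mkfs.xfs falls
> back to the 4KB default.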
>
> As Dave stated, if you format a concatenated device with XFS and you
> desire to align XFS, then all constituent arrays must have the same
> geometry.
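>
> For example, if all three arrays were instead 12-disk RAID 6 at a 512KB
> strip (10 data disks each), the whole concatenated LV could be aligned
> as one unit:
>
>     mkfs.xfs -d su=512k,sw=10 /dev/vg0/lv0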
>
> Three things to be aware of here:
>
> 1.  With a decent hardware write caching RAID controller, having XFS
> aligned to the RAID geometry is a small optimization WRT overall write
> performance, because the controller is going to be doing the optimizing
> of final writeback to the drives.
>
> 2. Alignment does not affect read performance.
>
> 3.  XFS only performs aligned writes during allocation, i.e. this only
> occurs when creating a new file, new inode, etc.  For append and
> modify-in-place operations, there is no write alignment.  So again,
> stripe alignment to the hardware geometry is merely an optimization, and
> only affects some types of writes.
>
> What really makes a difference as to whether alignment will be of
> benefit to you, and how often, is your workload.  So at this point, you
> need to describe the primary workload(s) of the systems we're discussing.
>
> --
> Stan
>
>


-- 
Stewart Webb