<div dir="ltr">Thanks for all this info Stan and Dave,<div><span style="font-family:arial,sans-serif;font-size:13px"><br></span></div><div><span style="font-family:arial,sans-serif;font-size:13px">> "Stripe size" is a synonym of XFS sw, which is su * #disks. This is the</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px">> amount of data written across the full RAID stripe (excluding parity).</span><br></div><div><br></div><div>The reason I stated Stripe size is because in this instance, I have 3ware RAID controllers, which refer to</div>
<div>this value as "Stripe" in their tw_cli software (god bless manufacturers renaming everything)</div><div><br></div><div>I do, however, have a follow-on question:</div><div>On other systems, I have similar hardware:</div>
<div>3x RAID controllers:</div><div>1 of them has 10 disks as a RAID 6 that I would like to add to a logical volume</div><div>2 of them have 12 disks each as a RAID 6 that I would like to add to the same logical volume</div><div>
<br></div><div>All have the same "Stripe" or "Strip Size" of 512 KB</div><div><br></div><div>So if I were going to make 3 separate XFS filesystems, I would do the following:</div><div>mkfs.xfs -d su=512k,sw=8 /dev/sda</div>
<div>mkfs.xfs -d su=512k,sw=10 /dev/sdb</div><div>mkfs.xfs -d su=512k,sw=10 /dev/sdc</div><div><br></div><div>I assume that if I were going to bring them all into 1 logical volume, it would be best to have the sw value set</div>
<div>to a value that divides evenly into both 8 and 10 - in this case 2?</div><div><br></div><div>Obviously, this is not an ideal situation, and I will most likely modify the hardware to better suit.</div><div>But I'd really like to fully understand this.</div>
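<div><br></div><div>For reference, here is roughly what I have in mind - just a sketch, where the volume group and logical volume names are placeholders I made up, and sw=2 is only my guess based on the reasoning above:</div><div><br></div><div># one physical volume per hardware RAID array</div><div>pvcreate /dev/sda /dev/sdb /dev/sdc</div><div>vgcreate vg_data /dev/sda /dev/sdb /dev/sdc</div><div># no striping options, so the LV is linear and simply concatenates the three arrays</div><div>lvcreate -n lv_data -l 100%FREE vg_data</div><div># align to the common-divisor value I am asking about</div><div>mkfs.xfs -d su=512k,sw=2 /dev/vg_data/lv_data</div>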
<div><br></div><div>Thanks for any insight you are able to give</div><div><br></div><div>Regards</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On 25 September 2013 22:57, Dave Chinner <span dir="ltr"><<a href="mailto:david@fromorbit.com" target="_blank">david@fromorbit.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On Wed, Sep 25, 2013 at 03:34:01PM -0600, Chris Murphy wrote:<br>
><br>
> On Sep 25, 2013, at 3:18 PM, Stan Hoeppner <<a href="mailto:stan@hardwarefreak.com">stan@hardwarefreak.com</a>> wrote:<br>
><br>
> > On 9/25/2013 7:56 AM, Stewart Webb wrote:<br>
> >> Hi All,<br>
> ><br>
> > Hi Stewart,<br>
> ><br>
> >> I am trying to do the following:<br>
> >> 3 x Hardware RAID Cards each with a raid 6 volume of 12 disks presented to<br>
> >> the OS<br>
> >> all raid units have a "stripe size" of 512 KB<br>
> ><br>
> > Just for future reference so you're using correct terminology, a value<br>
> > of 512KB is surely your XFS su value, also called a "strip" in LSI<br>
> > terminology, or a "chunk" in Linux software md/RAID terminology. This<br>
> > is the amount of data written to each data spindle (excluding parity) in<br>
> > the array.<br>
> ><br>
> > "Stripe size" is a synonym of XFS sw, which is su * #disks. This is the<br>
> > amount of data written across the full RAID stripe (excluding parity).<br>
> ><br>
> >> so given the info on the <a href="http://xfs.org" target="_blank">xfs.org</a> wiki - I should give each filesystem a<br>
> >> sunit of 512 KB and a swidth of 10 (because RAID 6 has 2 parity disks)<br>
> ><br>
> > Partially correct. If you format each /dev/[device] presented by the<br>
> > RAID controller with an XFS filesystem, 3 filesystems total, then your<br>
> > values above are correct. EXCEPT you must use the su/sw parameters in<br>
> > mkfs.xfs if using BYTE values. See mkfs.xfs(8)<br>
> ><br>
> >> all well and good<br>
> >><br>
> >> But - I would like to use Linear LVM to bring all 3 cards into 1 logical<br>
> >> volume -<br>
> >> here is where my question crops up:<br>
> >> Does this effect how I need to align the filesystem?<br>
> ><br>
> > In the case of a concatenation, which is what LVM linear is, you should<br>
> > use an XFS alignment identical to that for a single array as above.<br>
</div></div> ^^^^^^<br>
<div class="im">> So keeping the example, 3 arrays x 10 data disks, would this be su=512k and sw=30?<br>
<br>
</div>No, the alignment should match that of a *single* 10 disk array,<br>
so su=512k,sw=10.<br>
<br>
Linear concatenation looks like this:<br>
<br>
<pre>
offset  volume                    array
0       +-D1-+-D2-+.....+-Dn-+    0    # first sw
.....
X-sw    +-D1-+-D2-+.....+-Dn-+    0
X       +-E1-+-E2-+.....+-En-+    1    # first sw
.....
2X-sw   +-E1-+-E2-+.....+-En-+    1
2X      +-F1-+-F2-+.....+-Fn-+    2    # first sw
.....
3X-sw   +-F1-+-F2-+.....+-Fn-+    2
</pre>
<br>
Where:<br>
D1...Dn are the disks in the first array<br>
E1...En are the disks in the second array<br>
F1...Fn are the disks in the third array<br>
X is the size of each array<br>
sw = su * number of data disks in array<br>
<br>
As you can see, all the volumes are arranged in a single column -<br>
identical to a larger single array of the same size. Hence the<br>
exposed alignment of a single array is what the filesystem should be<br>
aligned to, as that is how the linear concat behaves.<br>
<br>
You also might note here that if you want the second and subsequent<br>
arrays to be correctly aligned to the initial array in the linear<br>
concat (and you do want that), the arrays must be sized to be an<br>
exact multiple of the stripe width.<br>
<br>
Cheers,<br>
<br>
Dave.<br>
<span class="HOEnZb"><font color="#888888">--<br>
Dave Chinner<br>
<a href="mailto:david@fromorbit.com">david@fromorbit.com</a><br>
</font></span><div class="HOEnZb"><div class="h5"><br>
_______________________________________________<br>
xfs mailing list<br>
<a href="mailto:xfs@oss.sgi.com">xfs@oss.sgi.com</a><br>
<a href="http://oss.sgi.com/mailman/listinfo/xfs" target="_blank">http://oss.sgi.com/mailman/listinfo/xfs</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div dir="ltr"><div></div>Stewart Webb<br></div>
</div>