<div dir="ltr">><span style="font-family:arial,sans-serif;font-size:13px">Right, and it does so not only to improve write performance, but to</span><br style="font-family:arial,sans-serif;font-size:13px"><span style="font-family:arial,sans-serif;font-size:13px">>also maximise sequential read performance of the data that is</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px">>written, especially when multiple files are being read</span><br style="font-family:arial,sans-serif;font-size:13px"><span style="font-family:arial,sans-serif;font-size:13px">>simultaneously and IO latency is important to keep low (e.g.</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px">>realtime video ingest and playout).</span><div><span style="font-family:arial,sans-serif;font-size:13px"><br></span></div><div><span style="font-family:arial,sans-serif;font-size:13px">So does this mean I should avoid combining RAID devices with differing numbers of spindles (or non-parity disks) when using linear concatenation under LVM? Or is there a best practice if this situation is unavoidable?</span></div><div><span style="font-family:arial,sans-serif;font-size:13px"><br></span></div><div><span style="font-family:arial,sans-serif;font-size:13px">Regards</span></div>
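The mismatch can be made concrete with a little arithmetic; a sketch (the disk counts are taken from the 10- and 12-disk RAID6 arrays mentioned later in the thread, and everything else is assumed):

```shell
# RAID6 uses 2 parity disks, so the XFS sw value is total disks - 2
# (data disks only). Per-array geometry for the three controllers:
su_kb=512
for disks in 10 12 12; do
  sw=$((disks - 2))
  echo "disks=$disks -> su=${su_kb}k sw=$sw full-stripe=$((su_kb * sw))k"
done
# -> disks=10 -> su=512k sw=8 full-stripe=4096k
# -> disks=12 -> su=512k sw=10 full-stripe=5120k (twice)
```

Since the full-stripe sizes differ (4096k vs 5120k), no single su/sw pair can align allocations to every array in the concatenation, which is why the default (unaligned) geometry is used in that case.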
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On 27 September 2013 02:10, Stan Hoeppner <span dir="ltr"><<a href="mailto:stan@hardwarefreak.com" target="_blank">stan@hardwarefreak.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On 9/26/2013 4:58 PM, Dave Chinner wrote:<br>
> On Thu, Sep 26, 2013 at 04:22:30AM -0500, Stan Hoeppner wrote:<br>
>> On 9/26/2013 3:55 AM, Stewart Webb wrote:<br>
>>> Thanks for all this info Stan and Dave,<br>
>>><br>
>>>> "Stripe size" is a synonym of the XFS stripe width (su * #data disks):<br>
>>>> the amount of data written across the full RAID stripe (excluding parity).<br>
>>><br>
>>> The reason I stated Stripe size is because in this instance, I have 3ware<br>
>>> RAID controllers, which refer to<br>
>>> this value as "Stripe" in their tw_cli software (god bless manufacturers<br>
>>> renaming everything)<br>
>>><br>
>>> I do, however, have a follow-on question:<br>
>>> On other systems, I have similar hardware:<br>
>>> 3x Raid Controllers<br>
>>> 1 of them has 10 disks as RAID 6 that I would like to add to a logical<br>
>>> volume<br>
>>> 2 of them have 12 disks as a RAID 6 that I would like to add to the same<br>
>>> logical volume<br>
>>><br>
>>> All have the same "Stripe" or "Strip Size" of 512 KB<br>
>>><br>
>>> So if I were going to make 3 separate xfs volumes, I would do the<br>
>>> following:<br>
>>> mkfs.xfs -d su=512k,sw=8 /dev/sda<br>
>>> mkfs.xfs -d su=512k,sw=10 /dev/sdb<br>
>>> mkfs.xfs -d su=512k,sw=10 /dev/sdc<br>
>>><br>
>>> I assume, if I were going to bring them all into 1 logical volume, it<br>
>>> would be best placed to have the sw value set<br>
>>> to a value that divides both 8 and 10 evenly - in this case 2?<br>
>><br>
>> No. In this case you do NOT stripe align XFS to the storage, because<br>
>> it's impossible--the RAID stripes are dissimilar. In this case you use<br>
>> the default 4KB write out, as if this is a single disk drive.<br>
>><br>
>> As Dave stated, if you format a concatenated device with XFS and you<br>
>> desire to align XFS, then all constituent arrays must have the same<br>
>> geometry.<br>
>><br>
>> Two things to be aware of here:<br>
>><br>
>> 1. With a decent hardware write caching RAID controller, having XFS<br>
>> aligned to the RAID geometry is a small optimization WRT overall write<br>
>> performance, because the controller is going to be doing the optimizing<br>
>> of final writeback to the drives.<br>
>><br>
>> 2. Alignment does not affect read performance.<br>
><br>
> Ah, but it does...<br>
><br>
>> 3. XFS only performs aligned writes during allocation.<br>
><br>
> Right, and it does so not only to improve write performance, but to<br>
> also maximise sequential read performance of the data that is<br>
> written, especially when multiple files are being read<br>
> simultaneously and IO latency is important to keep low (e.g.<br>
> realtime video ingest and playout).<br>
<br>
</div></div>Absolutely correct, as Dave always is. As my workloads are mostly<br>
random, as are those of others I consult in other fora, I sometimes<br>
forget the [multi]streaming case. Which is not good, as many folks<br>
choose XFS specifically for [multi]streaming workloads. My remarks to<br>
this audience should always reflect that. Apologies for my oversight on<br>
this occasion.<br>
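For anyone wanting to verify what alignment an existing filesystem was created with, xfs_info reports sunit/swidth in filesystem blocks, which can be converted back to su/sw; a sketch, with hypothetical numbers for a su=512k, sw=8 array and 4 KB blocks:

```shell
# xfs_info <mountpoint> reports sunit and swidth in filesystem blocks
# of bsize bytes each. Hypothetical values for su=512k, sw=8:
bsize=4096
sunit=128     # from xfs_info output (assumed)
swidth=1024   # from xfs_info output (assumed)
echo "su=$((sunit * bsize / 1024))k sw=$((swidth / sunit))"
# -> su=512k sw=8
```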
<div class="HOEnZb"><div class="h5"><br>
>> What really makes a difference as to whether alignment will be of<br>
>> benefit to you, and how often, is your workload. So at this point, you<br>
>> need to describe the primary workload(s) of the systems we're discussing.<br>
><br>
> Yup, my thoughts exactly...<br>
><br>
> Cheers,<br>
><br>
> Dave.<br>
><br>
<br>
--<br>
</div></div><span class="HOEnZb"><font color="#888888">Stan<br>
<br>
</font></span></blockquote></div><br><br clear="all"><div><br></div>-- <br><div dir="ltr"><div></div>Stewart Webb<br></div>
</div>