<div dir="ltr">Hi Stan,<div><br></div><div>Apologies for not directly answering - </div><div>I was aiming at filling gaps in my knowledge that I could not find in the <a href="http://xfs.org">xfs.org</a> wiki.</div><div><br>
</div><div>My storage workload is mainly reads of single large files (ranging from 20 GB to 100 GB each).</div><div>These reads are mostly linear (video playback, though not strictly so, as the end user may jump to different points in the video).</div>
<div>Between 2 and 8 concurrent reads are required; any more would be a bonus.</div><div>The challenge is that these reads are "real-time" operations driven by a person, so each</div>
<div>read must consistently maintain low latency and sustain speeds of over 50 Mb/s.</div><div><br></div><div>Disk write speeds are not <i>as</i> important for me, as these files are copied into place before they are required (in this case</div>
<div>using rsync or scp), and those operations do not require the same "real-time" interaction.</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On 27 September 2013 14:09, Stan Hoeppner <span dir="ltr"><<a href="mailto:stan@hardwarefreak.com" target="_blank">stan@hardwarefreak.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On 9/27/2013 7:23 AM, Stewart Webb wrote:<br>
>> Right, and it does so not only to improve write performance, but to<br>
>> also maximise sequential read performance of the data that is<br>
>> written, especially when multiple files are being read<br>
>> simultaneously and IO latency is important to keep low (e.g.<br>
>> realtime video ingest and playout).<br>
><br>
> So does this mean that I should avoid having devices in RAID with a<br>
> differing amount of spindles (or non-parity disks)<br>
> If I would like to use Linear concatenation LVM? Or is there a best<br>
> practice if this instance is not<br>
> avoidable?<br>
<br>
</div>Above, Dave was correcting my oversight, not necessarily informing you,<br>
per se. It seems clear from your follow-up question that you didn't<br>
really grasp what he was saying. Let's back up a little bit.<br>
<br>
What you need to concentrate on right now is the following which we<br>
stated previously in the thread, but which you did not reply to:<br>
<div class="im"><br>
>>>> What really makes a difference as to whether alignment will be of<br>
>>>> benefit to you, and how often, is your workload. So at this point, you<br>
>>>> need to describe the primary workload(s) of your systems we're<br>
>> discussing.<br>
>>><br>
>>> Yup, my thoughts exactly...<br>
<br>
</div>This means you need to describe in detail how you are writing your<br>
files, and how you are reading them back. I.e. what application are you<br>
using, what does it do, etc. You stated IIRC that your workload is 80%<br>
read. What types of files is it reading? Small, large? Is it reading<br>
multiple files in parallel? How are these files originally written<br>
before being read? Etc, etc.<br>
<br>
You may not understand why this is relevant, but it is the only thing<br>
that is relevant, at this point. Spindles, RAID level, alignment, no<br>
alignment...none of this matters if it doesn't match up with how your<br>
application(s) do their IO.<br>
<br>
Rule #1 of storage architecture: Always build your storage stack (i.e.<br>
disks, controller, driver, filesystem, etc) to fit the workload(s), not<br>
the other way around.<br>
<div class="HOEnZb"><div class="h5"><br>
><br>
> On 27 September 2013 02:10, Stan Hoeppner <<a href="mailto:stan@hardwarefreak.com">stan@hardwarefreak.com</a>> wrote:<br>
><br>
>> On 9/26/2013 4:58 PM, Dave Chinner wrote:<br>
>>> On Thu, Sep 26, 2013 at 04:22:30AM -0500, Stan Hoeppner wrote:<br>
>>>> On 9/26/2013 3:55 AM, Stewart Webb wrote:<br>
>>>>> Thanks for all this info Stan and Dave,<br>
>>>>><br>
>>>>>> "Stripe size" is a synonym of XFS sw, which is su * #disks. This is<br>
>> the<br>
>>>>>> amount of data written across the full RAID stripe (excluding parity).<br>
>>>>><br>
>>>>> The reason I stated Stripe size is because in this instance, I have<br>
>> 3ware<br>
>>>>> RAID controllers, which refer to<br>
>>>>> this value as "Stripe" in their tw_cli software (god bless<br>
>> manufacturers<br>
>>>>> renaming everything)<br>
>>>>><br>
>>>>> I do, however, have a follow-on question:<br>
>>>>> On other systems, I have similar hardware:<br>
>>>>> 3x Raid Controllers<br>
>>>>> 1 of them has 10 disks as RAID 6 that I would like to add to a logical<br>
>>>>> volume<br>
>>>>> 2 of them have 12 disks as a RAID 6 that I would like to add to the<br>
>> same<br>
>>>>> logical volume<br>
>>>>><br>
>>>>> All have the same "Stripe" or "Strip Size" of 512 KB<br>
>>>>><br>
>>>>> So if I were going to make 3 separate xfs volumes, I would do the<br>
>>>>> following:<br>
>>>>> mkfs.xfs -d su=512k sw=8 /dev/sda<br>
>>>>> mkfs.xfs -d su=512k sw=10 /dev/sdb<br>
>>>>> mkfs.xfs -d su=512k sw=10 /dev/sdc<br>
>>>>><br>
>>>>> I assume, if I were going to bring them all into 1 logical volume, it<br>
>>>>> would be best placed to have the sw value set<br>
>>>>> to a value that divides both 8 and 10 evenly - in this case 2?<br>
>>>><br>
>>>> No. In this case you do NOT stripe align XFS to the storage, because<br>
>>>> it's impossible--the RAID stripes are dissimilar. In this case you use<br>
>>>> the default 4KB write out, as if this is a single disk drive.<br>
>>>><br>
>>>> As Dave stated, if you format a concatenated device with XFS and you<br>
>>>> desire to align XFS, then all constituent arrays must have the same<br>
>>>> geometry.<br>
>>>><br>
>>>> Two things to be aware of here:<br>
>>>><br>
>>>> 1. With a decent hardware write caching RAID controller, having XFS<br>
>>>> aligned to the RAID geometry is a small optimization WRT overall write<br>
>>>> performance, because the controller is going to be doing the optimizing<br>
>>>> of final writeback to the drives.<br>
>>>><br>
>>>> 2. Alignment does not affect read performance.<br>
>>><br>
>>> Ah, but it does...<br>
>>><br>
>>>> 3. XFS only performs aligned writes during allocation.<br>
>>><br>
>>> Right, and it does so not only to improve write performance, but to<br>
>>> also maximise sequential read performance of the data that is<br>
>>> written, especially when multiple files are being read<br>
>>> simultaneously and IO latency is important to keep low (e.g.<br>
>>> realtime video ingest and playout).<br>
>><br>
>> Absolutely correct, as Dave always is. As my workloads are mostly<br>
>> random, as are those of others I consult in other fora, I sometimes<br>
>> forget the [multi]streaming case. Which is not good, as many folks<br>
>> choose XFS specifically for [multi]streaming workloads. My remarks to<br>
>> this audience should always reflect that. Apologies for my oversight on<br>
>> this occasion.<br>
>><br>
>>>> What really makes a difference as to whether alignment will be of<br>
>>>> benefit to you, and how often, is your workload. So at this point, you<br>
>>>> need to describe the primary workload(s) of your systems we're<br>
>> discussing.<br>
>>><br>
>>> Yup, my thoughts exactly...<br>
>>><br>
>>> Cheers,<br>
>>><br>
>>> Dave.<br>
>>><br>
>><br>
>> --<br>
>> Stan<br>
>><br>
>><br>
><br>
><br>
<br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div dir="ltr"><div></div>Stewart Webb<br></div>
</div>