
Re: makefs alignment issue

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: makefs alignment issue
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Mon, 27 Oct 2014 18:04:05 -0500
Cc: Eric Sandeen <sandeen@xxxxxxxxxxx>, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20141026234325.GB6880@dastard>
References: <544AB289.8010005@xxxxxxxxxxxxxxxxx> <544AB338.2050905@xxxxxxxxxxx> <544ACDC4.1070501@xxxxxxxxxxxxxxxxx> <544AD077.4080305@xxxxxxxxxxx> <544AD234.3060100@xxxxxxxxxxx> <544B1439.6060509@xxxxxxxxxxxxxxxxx> <544BC6FA.8090101@xxxxxxxxxxx> <544BDF55.9040804@xxxxxxxxxxxxxxxxx> <20141026234325.GB6880@dastard>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Icedove/24.7.0
On 10/26/2014 06:43 PM, Dave Chinner wrote:
> On Sat, Oct 25, 2014 at 12:35:17PM -0500, Stan Hoeppner wrote:
>> If the same interface is used for Linux logical block devices (md, dm,
>> lvm, etc) and hardware RAID, I have a hunch it may be better to
>> determine that, if possible, before doing anything with these values.
>> As you said previously, and I agree 100%, a lot of RAID vendors don't
>> export meaningful information here.  In this specific case, I think the
>> RAID engineers are exporting a value, 1 MB, that works best for their
>> cache management, or some other path in their firmware.  They're
>> concerned with host interface xfer into the controller, not the IOs on
>> the back end to the disks.  They don't see this as an end-to-end deal.
>> In fact, I'd guess most of these folks see their device as performing
>> magic, and it doesn't matter what comes in or goes out either end.
>> "We'll take care of it."
> Deja vu. This is an isochronous RAID array you are having trouble
> with, isn't it?

I don't believe so.  I'm pretty sure the parity rotates, i.e. it's a
standard rotating-parity RAID6 layout, not an isochronous design.

> FWIW, do your problems go away when you make you hardware LUN width
> a multiple of the cache segment size?

Hadn't tried it, and I don't have the opportunity now as my contract
has ended.  However, the problems we were having weren't controller
issues but excessive seeking.  I mentioned this in that (rather
lengthy) previous reply.

>> optimal_io_size.  I'm guessing this has different meaning for different
>> folks.  You say optimal_io_size is the same as RAID width.  Apply that
>> to this case:
>> hardware RAID 60 LUN, 4 arrays
>> 16+2 RAID6, 256 KB stripe unit, 4096 KB stripe width
>> 16 MB LUN stripe width
>> optimal_io_size = 16 MB
>> Is that an appropriate value for optimal_io_size even if this is the
>> RAID width?  I'm not saying it isn't.  I don't know.  I don't know what
>> other layers of the Linux and RAID firmware stacks are affected by this,
>> nor how they're affected.
> yup, i'd expect minimum = 4MB (i.e stripe unit 4MB so we align to
> the underlying RAID6 luns) and optimal = 16MB for the stripe width
> (and so with swalloc we align to the first lun in the RAID0).
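For what it's worth, the arithmetic behind those two numbers can be
sketched like so (the helper function and its names are mine, just to
make the derivation explicit; each RAID6 leg has 16 data disks at a
256 KiB stripe unit, and four legs are striped into the RAID60 LUN):

```python
KiB = 1024

def raid60_geometry(data_disks, su_kib, legs):
    """Return (minimum_io, optimal_io) in bytes for a RAID60 LUN:
    one leg's full stripe becomes the RAID0 stripe unit (minimum),
    and all legs together make the full LUN width (optimal)."""
    leg_width = data_disks * su_kib * KiB
    return leg_width, leg_width * legs

minimum_io, optimal_io = raid60_geometry(data_disks=16, su_kib=256, legs=4)
print(minimum_io // (KiB * KiB), "MiB minimum,",
      optimal_io // (KiB * KiB), "MiB optimal")  # 4 MiB minimum, 16 MiB optimal
```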

At a 4 MB minimum, how does that affect journal writes, which will be
much smaller, especially with a large file streaming workload, for
which this setup is appropriate?  Isn't the minimum a hard setting,
i.e. we can never do an IO smaller than 4 MB?  Do other layers of the
stack use this variable?  Are they expecting values this large?

> This should be passed up unchanged through the stack if none of the
> software layers are doing other geometry modifications (e.g. more
> raid, thinp, etc).
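The way the block layer merges limits when layers are stacked can be
sketched roughly like this (a simplified model of what the kernel's
blk_stack_limits() does for these two fields; the real code in
block/blk-settings.c handles many more limits):

```python
from math import gcd

def lcm(a, b):
    # lcm that tolerates a zero on either side (0 means "no preference")
    return a * b // gcd(a, b) if a and b else max(a, b)

def stack_limits(top_min, top_opt, bottom_min, bottom_opt):
    """Merge io limits for a stacked device: minimum_io_size takes
    the max of the layers, optimal_io_size takes the lcm."""
    return max(top_min, bottom_min), lcm(top_opt, bottom_opt)

# A pass-through layer with no geometry of its own (0/0) leaves the
# hardware LUN's 4 MB / 16 MB hints unchanged:
print(stack_limits(0, 0, 4 << 20, 16 << 20))  # (4194304, 16777216)
```

So as long as each software layer either has no geometry or a
compatible one, the hardware values survive to the top.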

I agree, if RAID vendors all did the right thing...

