On Sat, Oct 25, 2014 at 12:35:17PM -0500, Stan Hoeppner wrote:
> If the same interface is used for Linux logical block devices (md, dm,
> lvm, etc) and hardware RAID, I have a hunch it may be better to
> determine that, if possible, before doing anything with these values.
> As you said previously, and I agree 100%, a lot of RAID vendors don't
> export meaningful information here. In this specific case, I think the
> RAID engineers are exporting a value, 1 MB, that works best for their
> cache management, or some other path in their firmware. They're
> concerned with host interface xfer into the controller, not the IOs on
> the back end to the disks. They don't see this as an end-to-end deal.
> In fact, I'd guess most of these folks see their device as performing
> magic, and it doesn't matter what comes in or goes out either end.
> "We'll take care of it."
Deja vu. This is an isochronous RAID array you are having trouble
with, isn't it?
FWIW, do your problems go away when you make your hardware LUN width
a multiple of the cache segment size?
> optimal_io_size. I'm guessing this has different meaning for different
> folks. You say optimal_io_size is the same as RAID width. Apply that
> to this case:
> hardware RAID 60 LUN, 4 arrays
> 16+2 RAID6, 256 KB stripe unit, 4096 KB stripe width
> 16 MB LUN stripe width
> optimal_io_size = 16 MB
> Is that an appropriate value for optimal_io_size even if this is the
> RAID width? I'm not saying it isn't. I don't know. I don't know what
> other layers of the Linux and RAID firmware stacks are affected by this,
> nor how they're affected.
Yup, I'd expect minimum_io_size = 4MB (i.e. a stripe unit of 4MB, so
we align to the underlying RAID6 LUNs) and optimal_io_size = 16MB for
the stripe width (and so with swalloc we align to the first LUN in
the RAID0).
This should be passed up unchanged through the stack if none of the
software layers are doing other geometry modifications (e.g. more
RAID, thin provisioning, etc).