I thought there was a way to empirically check that the filesystem is
correctly aligned to RAID stripes, but my attempts fail.
I don't mean looking at sunit and swidth from xfs_info, because that
would not detect an LVM offset problem underneath the filesystem.
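One offline sanity check I can do by hand (a sketch of my own; the 1 MiB LVM pe_start below is an assumption based on the usual default, and the chunk/disk counts match my array):

```python
# Check whether the filesystem's byte offset on the array is a multiple
# of the MD chunk (stripe unit) and of the full stripe width.
# Illustrative numbers, not queried from any real tool.

CHUNK_KIB = 1024                       # MD chunk size (stripe unit)
DATA_DISKS = 15                        # data disks in the parity array
STRIPE_KIB = CHUNK_KIB * DATA_DISKS    # full stripe width = 15360 KiB

def aligned(offset_kib, unit_kib=CHUNK_KIB):
    """True if the filesystem start offset is a multiple of the unit."""
    return offset_kib % unit_kib == 0

# LVM's usual default pe_start is 1 MiB = 1024 KiB, which happens to
# match a 1024 KiB chunk, so the FS start is stripe-unit aligned...
print(aligned(1024))              # True: chunk-aligned
# ...but not aligned to the full 15-disk stripe:
print(aligned(1024, STRIPE_KIB))  # False
```

Of course this only checks the arithmetic, which is why I was hoping for an empirical test on the live array.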
I am particularly interested in parity RAIDs in MD.
I was thinking of "iostat -x 1": if writes are aligned, I shouldn't see
any reads from the drives in a parity RAID...
unfortunately this does not work:
- a dd streaming write test shows almost no reads even when I mount with
"noalign", provided stripe_cache_size is large enough, e.g. 1024. If it
is smaller, there are always reads, even if xfs is aligned.
- a kernel untar shows lots of reads at any stripe_cache_size, even
though I'm pretty sure I aligned the stripes correctly on my 1024k x 15
data disks and the .tar.bz2 file was in cache. I tried both xfs stripe
autodetection in 2.6.37-rc2 and specifying su and sw values by hand,
which turned out to give the same values; I was not using LVM, so I'm
pretty sure alignment was correct. Why are there still lots of reads in
this case?
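For what it's worth, here is how I'd model where those reads come from (a rough sketch of read-modify-write vs reconstruct-write of my own, not MD's actual heuristic): any write smaller than a full stripe needs some old chunks read back before parity can be recomputed, so the many small metadata writes of an untar would trigger reads no matter how well aligned the filesystem is:

```python
def rmw_reads(write_kib, chunk_kib=1024, data_disks=15):
    """Rough count of chunks MD must read to update parity for one
    stripe-aligned write no larger than a stripe: read-modify-write
    (old data + old parity) for small writes, reconstruct-write
    (the untouched rest of the stripe) for large ones.
    Simplified toy model, not MD's actual code."""
    stripe = chunk_kib * data_disks
    touched = min(-(-write_kib // chunk_kib), data_disks)  # ceil division
    if touched == data_disks and write_kib % stripe == 0:
        return 0                       # full-stripe write: no reads at all
    rmw = touched + 1                  # read old data chunks + old parity
    rcw = data_disks - touched         # read the untouched data chunks
    return min(rmw, rcw)

print(rmw_reads(15 * 1024))  # full 15 MiB stripe -> 0 reads
print(rmw_reads(4))          # one 4 KiB metadata write -> 2 reads
```

So a streaming dd can fill whole stripes (no reads), while an untar mostly cannot, aligned or not.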
So I'm pretty clueless. Does anyone have good suggestions?
PS, OT: do you confirm it's not a good idea to have agsize be a multiple
of the stripe size, as mkfs warns? Today I offset it by +1 stripe unit
(chunk) so that every AG begins on a different drive, but performance
didn't improve noticeably. Wouldn't that cause more unfilled stripes
when writing?
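The effect I was aiming for, in a toy model (the 150- and 151-chunk agsize values are made up for illustration; my real agsize differs):

```python
def ag_start_disk(ag_index, agsize_chunks, data_disks=15):
    """Which data disk an AG's first block (and hence its header
    metadata) lands on, counting in chunk-sized units."""
    return (ag_index * agsize_chunks) % data_disks

# agsize an exact multiple of the stripe width (10 stripes = 150 chunks):
# every AG header lands on the same disk, which is what mkfs warns about.
print([ag_start_disk(i, 150) for i in range(4)])  # [0, 0, 0, 0]

# agsize offset by +1 chunk (151 chunks): headers rotate across disks.
print([ag_start_disk(i, 151) for i in range(4)])  # [0, 1, 2, 3]
```

The rotation clearly happens in the arithmetic; my question is whether the non-stripe-multiple agsize costs more partially filled stripes than the header distribution gains.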