better perf and memory usage for xfs_fsr? Trivial patch against xfstools-3.16 included...

Linda Walsh xfs at tlinx.org
Fri Nov 9 01:10:26 CST 2012



Dave Chinner wrote:
> On Thu, Nov 08, 2012 at 12:30:11PM -0800, Linda Walsh wrote:
>> FWIW, the benefit, probably comes from the read-file, as the written file
>> is written with DIRECT I/O and I can't see that it should make a difference
>> there.
> 
> Hmmm, so it does. I think that's probably the bug that needs to be
> fixed, not so much using posix_fadvise....
---
	Well... using direct I/O might be another way of fixing it,
but I notice that neither the reads nor the writes seem to use an optimal
I/O size that takes RAID alignment into consideration.  The code aligns
for memory and for a 2-4k device sector size, but doesn't take into
account things like a 64k stripe-unit x 12-wide data width (768k).
If you do direct I/O, you might want to be sure to RAID-align it...
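
	Just as a sketch (not the fsr code itself, and with error handling
abbreviated), the stripe geometry can be pulled from the filesystem with the
XFS_IOC_FSGEOMETRY ioctl and used to round a buffer size up to a full stripe
width, e.g.:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xfs/xfs.h>

static size_t
stripe_aligned_bufsize(int fd, size_t want)
{
	struct xfs_fsop_geom	geo;
	size_t			swidth_bytes;

	if (ioctl(fd, XFS_IOC_FSGEOMETRY, &geo) < 0 || geo.swidth == 0)
		return want;		/* no stripe geometry: leave as-is */

	/* sunit/swidth are reported in filesystem blocks */
	swidth_bytes = (size_t)geo.swidth * geo.blocksize;

	/* round the requested size up to a whole number of stripes */
	return ((want + swidth_bytes - 1) / swidth_bytes) * swidth_bytes;
}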


	Doing <64k at a time would cause heinous performance... while using
the SEQUENTIAL + READ-ONCE advice seems to cause a noticeable I/O smoothing
(no dips/valleys on the I/O charts), though I don't know how much
(if any) real performance increase (or decrease) there was, as setting
up exact fragmentation cases would be a pain...
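
	For reference, the advice on the read-side fd amounts to roughly the
following (a sketch only; POSIX_FADV_NOREUSE is my reading of the "READ-ONCE"
part and may not match the patch exactly):

#define _XOPEN_SOURCE 600
#include <fcntl.h>

static void
advise_read_once(int fd)
{
	/* hints only; failure is harmless, so return values are ignored */
	(void) posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
	(void) posix_fadvise(fd, 0, 0, POSIX_FADV_NOREUSE);
}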

	If you do LARGE I/Os on the READs, say 256MB at a time, I
don't think exact alignment will matter that much, but I notice speed
improvements up to a 1GB buffer size in reads + writes with 'dd' using
direct I/O.  (I couldn't test anything larger, as the device driver
doesn't seem to allow anything > 2GB-8k, even on a 64-bit machine;
at least I think it is the device driver, but it hasn't been important
enough to chase down.)
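
	Purely as an illustration (not from the patch), a large-buffer
O_DIRECT read loop looks roughly like this; the 4k alignment is a
conservative assumption, and the stripe-width rounding above would be
the natural way to pick the buffer size on a RAID volume:

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define BUFSZ	(256UL * 1024 * 1024)	/* 256MB per read, as an example */

static ssize_t
read_file_direct(const char *path)
{
	void	*buf;
	ssize_t	n, total = 0;
	int	fd = open(path, O_RDONLY | O_DIRECT);

	if (fd < 0)
		return -1;
	if (posix_memalign(&buf, 4096, BUFSZ) != 0) {
		close(fd);
		return -1;
	}

	/* O_DIRECT wants aligned buffers/offsets; the short read at EOF is ok */
	while ((n = read(fd, buf, BUFSZ)) > 0)
		total += n;		/* ...hand 'n' bytes to the writer here... */

	free(buf);
	close(fd);
	return (n < 0) ? -1 : total;
}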

	While such large buffers might be bad on a memory-tight
machine, on many 64-bit machines it's well worth the extra throughput
and lower disk-transfer time.  Meanwhile, that posix_fadvise
call added on the read side really does seem to help...
Try it, you'll like it!  ;-) (not to say it is the 'best' fix,
but it's pretty low cost!)...

> Cheers,
> 
> Dave.
