

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: better perf and memory usage for xfs_fsr? Trivial patch against xfstools-3.16 included...
From: Linda Walsh <xfs@xxxxxxxxx>
Date: Thu, 08 Nov 2012 23:10:26 -0800
Cc: xfs-oss <xfs@xxxxxxxxxxx>
In-reply-to: <20121108213911.GS6434@dastard>
References: <509BAABF.3030608@xxxxxxxxx> <509C1653.7050906@xxxxxxxxx> <20121108213911.GS6434@dastard>
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.24) Gecko/20100228 Lightning/0.9 Thunderbird/2.0.0.24 Mnenhy/0.7.6.666


Dave Chinner wrote:
> On Thu, Nov 08, 2012 at 12:30:11PM -0800, Linda Walsh wrote:
>> FWIW, the benefit probably comes from reading the file, as the written
>> file is written with DIRECT I/O and I can't see that it should make a
>> difference there.
>
> Hmmm, so it does. I think that's probably the bug that needs to be
> fixed, not so much using posix_fadvise....
---
        Well... using direct I/O on the read side might be another way of
fixing it... but I notice that neither the reads nor the writes seem to use
an optimal I/O size that takes RAID alignment into consideration.  The code
aligns buffers in memory and aligns for a 2-4k device sector size, but
doesn't seem to account for things like a 64k stripe unit x 12-wide data
width (768k)... if you do direct I/O, you might want to be sure to
RAID-align it...
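
        (To illustrate the idea -- this is not from my patch, just a
sketch assuming the xfsprogs headers, with a made-up helper name and
error handling trimmed -- the geometry ioctl tells you what's needed to
round a direct-I/O buffer up to a whole stripe width:

    #include <sys/ioctl.h>
    #include <xfs/xfs.h>    /* XFS_IOC_FSGEOMETRY, struct xfs_fsop_geom */

    static size_t raid_aligned_bufsize(int fd, size_t want)
    {
            struct xfs_fsop_geom geo;

            if (ioctl(fd, XFS_IOC_FSGEOMETRY, &geo) < 0 || geo.swidth == 0)
                    return want;    /* no stripe geometry; leave it alone */

            /* sunit/swidth are in fs blocks: 192 blocks x 4k = 768k here */
            size_t sw_bytes = (size_t)geo.swidth * geo.blocksize;
            return ((want + sw_bytes - 1) / sw_bytes) * sw_bytes;
    }

xfs_fsr could size its read/write buffers with something like that.)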


        Doing <64k at a time would cause heinous perf... while using the
SEQUENTIAL + READ-ONCE hints seems to cause notable I/O smoothing (no
dips/valleys on the I/O charts), though I don't know how much (if any)
real performance increase (or decrease) there was, as setting up exact
fragmentation test cases would be a pain...
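
        (For reference, the hints I mean are just two posix_fadvise()
calls on the read-side descriptor -- a minimal sketch, helper name mine:

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>

    static void advise_read_once(int fd)
    {
            /* len == 0 means "to the end of the file" */
            posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL); /* bigger readahead */
            posix_fadvise(fd, 0, 0, POSIX_FADV_NOREUSE);    /* read-once hint */
    }

I believe NOREUSE is a no-op on current Linux kernels, so the smoothing
probably comes from SEQUENTIAL bumping up the readahead window.)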

        If you do LARGE I/Os on the READs... say 256MB at a time, I don't
think exact alignment will matter that much, but I notice speed
improvements up to a 1GB buffer size for reads + writes in 'dd' using
direct I/O (I couldn't test larger sizes, as the device driver doesn't
seem to allow anything > 2GB-8k, this on a 64-bit machine; at least I
think it's the device driver, but it hasn't been important enough to
chase down).
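
        (The sort of test I mean, roughly -- exact numbers will vary
with the RAID:

    dd if=/some/big/file of=/dev/null bs=1G count=8 iflag=direct

stepping bs up from 64k; same thing with oflag=direct for the write
side.)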

        While such large buffers might be bad on a memory-tight machine,
on many 64-bit machines it's well worth the throughput and the lower
disk-transfer-time usage.  Meanwhile, that posix_fadvise call added on
the read side really does seem to help...  Try it, you'll like it!  ;-)
(not to say it's the 'best' fix, but it's pretty low cost!)...

