Consistent throughput challenge -- fragmentation?
Brian Cain
brian.cain at gmail.com
Mon Feb 25 16:06:55 CST 2013
On Mon, Feb 25, 2013 at 3:39 PM, Stan Hoeppner <stan at hardwarefreak.com> wrote:
>
>
> > Can anyone provide any suggestions as to an explanation for the
> > behavior or a way to mitigate it? Running xfs_fsr didn't seem to
> > improve the results.
>
> The usual cause of low performance on an aged filesystem like this is
> free space fragmentation. xfs_fsr will defragment files, but in doing
> so it *increases* free space fragmentation, so it won't help the
> situation.
>
> > I'm happy to share benchmarks, specific results data, or describe the
> > hardware being used for the measurements if it's helpful.
>
> Paste the output of 'xfs_db -r -c freesp /dev/[device]' just before you
> do the large file write. This will show us the free space distribution
> histogram.
>
Running now...
Here's a single sample:
   from      to extents  blocks    pct
      1       1     128     128   0.00
      2       3       6      18   0.00
      4       7       1       7   0.00
      8      15      30     275   0.00
    512    1023       1     528   0.01
   2048    4095       1    2656   0.03
4194304 8388608       1 8388588  99.96
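(If I'm reading this right, and assuming the default 4 KiB block size, that
single extent in the 4194304-8388608 bucket works out to roughly 32 GiB of
contiguous free space, so free space looks essentially unfragmented at the
moment this sample was taken.)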
Not sure whether the cycle following this output saw "only good" results
or whether it included poor-performing samples too. Is the "freesp" output
only useful when it's captured from a cycle where the poor performance
actually occurred?
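In case it helps line the two up, here's the sort of wrapper I was thinking
of using to grab a freesp sample immediately before each write cycle.
Untested sketch; /dev/sdX and ./write_test are placeholders for the real
device and benchmark command:

#!/bin/sh
# Capture the free space histogram right before each benchmark cycle,
# so every throughput result can be matched against a freesp sample.
DEV=/dev/sdX        # placeholder: the XFS block device under test
RUNS=20             # placeholder: number of benchmark cycles
for i in $(seq 1 $RUNS); do
    xfs_db -r -c freesp "$DEV" > freesp.$i.txt
    ./write_test > result.$i.txt    # placeholder: the large-file write test
done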
--
-Brian