On 2/25/2013 10:01 AM, Brian Cain wrote:
> I have been observing some odd behavior regarding write throughput to an
> XFS partition (the baseline kernel version is 188.8.131.52). I see
> consistently high write throughput (close to the performance of the raw
> block device) to the filesystem immediately after a mkfs, but after a few
> test cycles, there is sporadic poor performance.
> The test mechanism is like so:
> [mkfs.xfs <blockdev>] (no flags/options, xfsprogs ver 3.1.1-0.1.36)
> 1. remove a previous test cycle's directory
> 2. create a new directory
> 3. open/write/close a small file (4kb) in this directory
> 4. open/read/close this same small file (by the local NFS server)
> 5. open[O_DIRECT]/write/write/write/.../close a large file (anywhere from
> ~100MB to 200GB)
> Step #5 is where throughput is measured; it drops by an order of
> magnitude several test cycles after a mkfs. Omitting steps 1-3 does
> not reproduce the poor performance.
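To make sure we're talking about the same workload: the cycle above can be
sketched roughly as below. The directory path and sizes are illustrative
(the real runs wrote anywhere from ~100MB to 200GB in step 5), and dd's
oflag=direct stands in for the open[O_DIRECT]/write/.../close pattern.

```shell
#!/bin/sh
# Sketch of one test cycle. DIR and the sizes are illustrative only.
DIR=${DIR:-$(mktemp -d)}/cycle

rm -rf "$DIR"                                  # 1. remove previous cycle's dir
mkdir -p "$DIR"                                # 2. create a new directory
dd if=/dev/zero of="$DIR/small" bs=4k count=1 \
    conv=fsync 2>/dev/null                     # 3. open/write/close 4kb file
cat "$DIR/small" > /dev/null                   # 4. open/read/close it again
# 5. large write with O_DIRECT; fall back to buffered I/O if the
#    underlying filesystem (e.g. tmpfs) rejects O_DIRECT.
dd if=/dev/zero of="$DIR/large" bs=1M count=16 oflag=direct 2>/dev/null ||
    dd if=/dev/zero of="$DIR/large" bs=1M count=16 2>/dev/null
```

Timing step 5 in isolation (e.g. wrapping the last dd in `time`) for each
cycle should show the same degradation curve you're describing.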
> Can anyone provide any suggestions as to an explanation for the behavior or
> a way to mitigate it? Running xfs_fsr didn't seem to improve the results.
The usual cause of low performance on an aged filesystem like this is
free space fragmentation. xfs_fsr defragments files, but in doing so it
*increases* free space fragmentation, so it won't help this situation.
> I'm happy to share benchmarks, specific results data, or describe the
> hardware being used for the measurements if it's helpful.
Paste the output of 'xfs_db -r -c freesp /dev/[device]' just before you
do the large file write. This will show us the free space distribution
of the filesystem at that point.
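If you want a quick single number to track across cycles, something like
this hypothetical helper (not an xfs_db feature) can summarize that
output. It assumes the usual freesp column layout — from, to, extents,
blocks, pct — and sums the pct column for buckets below a size limit:

```shell
# Hypothetical helper: reads 'xfs_db -r -c freesp' output on stdin and
# sums the pct column for rows whose extent-size bucket tops out below
# the given limit (in filesystem blocks, default 1024). A large result
# means most free space sits in small, fragmented extents.
freesp_small() {
    awk -v lim="${1:-1024}" '
        NR > 1 && $2 < lim { s += $5 }   # NR > 1 skips the header line
        END { printf "%.1f\n", s }'
}
```

For example, `xfs_db -r -c freesp /dev/sdb1 | freesp_small 1024` would
print the percentage of free space held in extents smaller than 1024
blocks (4MB at a 4k block size); watching that climb cycle over cycle
would confirm the fragmentation theory.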