
Consistent throughput challenge -- fragmentation?

To: xfs@xxxxxxxxxxx
Subject: Consistent throughput challenge -- fragmentation?
From: Brian Cain <brian.cain@xxxxxxxxx>
Date: Mon, 25 Feb 2013 10:01:53 -0600
All,

I have been observing some odd write-throughput behavior on an XFS partition (baseline kernel version 2.6.32.27).  Write throughput to the filesystem is consistently high (close to that of the raw block device) immediately after a mkfs, but after a few test cycles the performance becomes sporadically poor.

The test mechanism is like so:

[mkfs.xfs <blockdev>] (no flags/options, xfsprogs ver 3.1.1-0.1.36)
...
1. remove a previous test cycle's directory 
2. create a new directory
3. open/write/close a small file (4kb) in this directory
4. open/read/close this same small file (by the local NFS server)
5. open[O_DIRECT]/write/write/write/.../close a large file (anywhere from ~100MB to 200GB)

Step #5 is where the high-throughput measurement is taken; it becomes an order of magnitude worse several test cycles after a mkfs.  If steps 1-3 are omitted, the poor performance does not appear.

Can anyone suggest an explanation for this behavior, or a way to mitigate it?  Running xfs_fsr didn't seem to improve the results.
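In case it helps anyone reproduce or diagnose this, the kind of check I can run is along these lines (paths and device name below are placeholders for the actual test file and block device):

```shell
# Per-file extent map: a sequentially written file that shows many
# small extents would point at allocation fragmentation.
xfs_bmap -v /mnt/test/testdir/largefile

# Filesystem-wide fragmentation factor, read-only on the device:
xfs_db -r -c frag /dev/sdX
```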

I'm happy to share benchmarks, specific results data, or describe the hardware being used for the measurements if it's helpful.

--
-Brian