XFS: Abysmal write performance because of excessive seeking (allocation groups to blame?)
Christoph Hellwig
hch at infradead.org
Thu Apr 5 16:37:40 CDT 2012
Hi Stefan,
thanks for the detailed report.
The seekwatcher output makes it very clear that XFS is spreading I/O over
the 4 allocation groups, while ext4 isn't. There are a couple of reasons
why XFS does that, including maximizing throughput across multiple devices
in a multi-device setup, and avoiding totally killing read speed.
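(If you want to double-check the layout on your end, xfs_info reports the
allocation group count for a mounted filesystem - the mount point below is
just a placeholder:

    # xfs_info /path/to/mountpoint

The agcount= field in the meta-data line is the number of AGs.)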
Can you try a few mount options for me, both all together and, if you have
some time, also individually? (A sample invocation for the combined run
follows the list.)
-o inode64

        This allows inodes to be allocated close to their data even for
        >1TB filesystems. It's something we hope to make the default soon.

-o filestreams

        This keeps files written into the same directory allocated close
        together. Not sure your directories are large enough to really
        benefit from it, but it's worth a try.

-o allocsize=4k

        This disables the aggressive file preallocation we do in XFS,
        which sounds like it's not useful for your workload.
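A rough example of the all-together run (device and mount point are
placeholders for your setup; note that inode64 only affects newly
allocated inodes, so mount fresh before recreating the test files):

    # umount /srv/test
    # mount -o inode64,filestreams,allocsize=4k /dev/sdX /srv/test

For the individual runs just drop the options you're not testing.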
> I ran the tests with a current RHEL 6.2 kernel and also with a 3.3rc2
> kernel. Both of them exhibited the same behavior. The disk hardware
> used was a SmartArray p400 controller with 6x 10k rpm 300GB SAS disks
> in RAID 6. The server has plenty of RAM (64 GB).
For metadata-intensive workloads like yours you would be much better off
using a non-striping RAID layout, e.g. concatenation plus mirroring
instead of RAID 5 or RAID 6. I know this has a cost in terms of "wasted"
space, but for an IOPS-bound workload the difference is dramatic.
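To sketch what I mean using Linux md (I don't know which layouts the p400
exposes, so all device names here are placeholders): pair the disks into
RAID 1 mirrors, concatenate the pairs, and let XFS spread its allocation
groups across the legs:

    # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    # mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
    # mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde /dev/sdf
    # mdadm --create /dev/md3 --level=linear --raid-devices=3 /dev/md0 /dev/md1 /dev/md2
    # mkfs.xfs /dev/md3

You end up with roughly 3x 300GB usable instead of 4x 300GB, but the
metadata-heavy writes stop competing for the same parity stripes.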
P.S. Please ignore Peter - he's made a name for himself as not only being
technically incompetent but also extremely abrasive. He is in no way
associated with the XFS development team.