On 09/11/2012 05:48 AM, Colin Ian King wrote:
> I'm seeing really slow I/O writes on xfs when doing a dd with a seek
> offset to a file on an xfs file system which is loop mounted.
> Reproduced on Linux 3.6.0-rc5 and 3.4
> How to reproduce:
> dd if=/dev/zero of=xfs.img bs=1M count=1024
> mkfs.xfs -f xfs.img
> sudo mount -o loop -t xfs xfs.img /mnt/test
> First create a large file, write performance is excellent:
> sudo dd if=/dev/zero of=/mnt/test/big bs=1M count=500
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB) copied, 1.69451 s, 309 MB/s
> Next, seek and write some more blocks; write performance is poor:
> sudo dd if=/dev/zero of=/mnt/test/big obs=4K count=8192 seek=131072
> 8192+0 records in
> 1024+0 records out
> 4194304 bytes (4.2 MB) copied, 47.0644 s, 89.1 kB/s
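For what it's worth, the numbers in that second dd are worth decoding:
seek= counts obs-sized (4 KiB) blocks while count= counts ibs-sized
(default 512-byte) blocks, so the write starts 512 MiB into the file and
extends it to ~516 MiB -- right up against the 1 GiB filesystem. A quick
sanity check (plain shell, no root or xfs needed):

```shell
# seek= is measured in obs-sized (4 KiB) blocks; count= in ibs-sized
# (default 512-byte) blocks:
offset=$((131072 * 4096))      # 536870912 bytes = 512 MiB start offset
written=$((8192 * 512))        # 4194304 bytes = 4 MiB actually written
echo $((offset + written))     # 541065216 bytes, the resulting file size

# The same dd against a plain file shows the resulting (sparse) size:
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" obs=4K count=8192 seek=131072 2>/dev/null
stat -c %s "$tmp"              # 541065216
rm -f "$tmp"
```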
I reproduced this behavior with a 1GB filesystem on a loop device. I
think the problem you're seeing could be circumstantial to the fact that
you're writing to a point where you are close to filling the fs.
Taking a look at the tracepoint data when running your second dd alone
vs. in succession to the first, I see a fairly clean
buffered_write->get_blocks pattern in the former vs. a repeated sequence
of failed allocations and flushes in the latter.
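For reference, something like the following captures that tracepoint
data (this assumes trace-cmd is installed; enabling the whole xfs
subsystem is noisy but sufficient, and you can filter down later):

```shell
# Record all xfs tracepoints around the slow dd, then inspect the report.
# Needs root; '-e xfs' enables every event in the xfs subsystem.
sudo trace-cmd record -e xfs \
    dd if=/dev/zero of=/mnt/test/big obs=4K count=8192 seek=131072
trace-cmd report | less
```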
In other words, you're triggering an internal space allocation failure
and flush sequence intended to free up space. Somebody else might be
able to chime in and more ably explain why that occurs following a
truncate (perhaps the space isn't freed until the change hits the log),
but regardless this doesn't seem to occur if you increase the size of
the filesystem.
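A quick way to check the near-full theory is to repeat your reproducer
with a larger backing file, e.g. 4 GiB, so the 512 MiB offset is nowhere
near the end of the filesystem (sketch only, same commands with the size
bumped; needs root to mount):

```shell
# Same reproducer, but with a 4 GiB image instead of 1 GiB:
dd if=/dev/zero of=xfs.img bs=1M count=4096
mkfs.xfs -f xfs.img
sudo mount -o loop -t xfs xfs.img /mnt/test
sudo dd if=/dev/zero of=/mnt/test/big bs=1M count=500
sudo dd if=/dev/zero of=/mnt/test/big obs=4K count=8192 seek=131072
```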
> Using blktrace and seektracer I've captured the I/O on the block device
> containing the xfs.img and I'm seeing ~55-70 seeks per second during the
> slow writes, which seems excessive.
> I can reproduce this on hardware with 1, 4 or 8 CPUs.
> I've tested this with other file systems and don't see this issue, so
> it looks like an xfs + loop mount issue.
> Is this a known performance "feature"?
> xfs mailing list