On Sun, Jan 15, 2012 at 07:01:42PM -0500, Zheng Da wrote:
> I was surprised to find that writing data to a file (no appending) with direct
I'm not so sure.
> And systemtap shows me that xfs_inode.i_lock is locked exclusively in the
> following functions.
> 0xffffffff81289235 : xfs_file_aio_write_checks+0x45/0x1d0 [kernel]
Always taken, short time period.
> 0xffffffff81288b6a : xfs_aio_write_newsize_update+0x3a/0x90 [kernel]
Only ever taken when doing appending writes. Are you -sure- you are
not doing appending writes?
> 0xffffffff812829f4 : __xfs_get_blocks+0x94/0x4a0 [kernel]
And for direct IO writes, this will be the block mapping lookup, so
it is taken for every IO that is issued.
What this says to me is that you are probably doing lots of very
small concurrent write IOs, but I'm only guessing. Can you provide
your test case and a description of your test hardware so we can try
to reproduce the problem?
> 0xffffffff8129590a : xfs_log_dirty_inode+0x7a/0xe0 [kernel]
> xfs_log_dirty_inode is only invoked 3 times when I write 4G of data to
> the file, so we can completely ignore it. I'm not sure which of the
> others is the major cause of the bad write performance, or whether they
> are the cause at all. It seems none of them are the main operations in
> a direct IO write.
> It seems to me that the lock might not be necessary for my case. It'll be
The locking is definitely necessary. We might be able to optimise it
to reduce the serialisation for the overwrite case if that really is
the problem, but there is a limit to how much concurrent IO you can
currently do to a single file. We really need a test case to be able
to make and test such optimisations, though.
> nice if I could disable the lock. Or is there any suggestion for
> achieving better write performance with multiple threads in XFS?
> I tried ext4 and it doesn't perform better than XFS. Does the problem
> exist in all filesystems?
I think you'll find XFS performs the best of the lot for this sort
of concurrent DIO write workload.