On Mon, Jan 25, 2010 at 02:28:39PM -0600, bpm@xxxxxxx wrote:
> The original tests were done with the wsync mount option; I'm not
> really sure that it was necessary. The test case was "tar -xvf
> ImageMagick.tar". 'fdatasync' indicates whether the export option
> selecting ->fsync over write_inode_now was set.
Ok. Btw, you need to call ->fsync with fdatasync = 0 for NFS as it
also wants to catch non-data changes to the inode. Doesn't matter
for XFS as we currently always force a full fsync, but I'm going to
change that soon.
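Roughly like this, as a sketch against the current vfs_fsync()
signature (nfsd_commit_inode() is an invented name for illustration,
not actual nfsd code):

#include <linux/fs.h>

/*
 * Flush an inode to stable storage for NFS, either through
 * ->write_inode (old behaviour) or through the filesystem's
 * ->fsync method via vfs_fsync() (the patch under discussion).
 */
static int nfsd_commit_inode(struct file *file, int use_fsync)
{
	struct inode *inode = file->f_path.dentry->d_inode;

	if (!use_fsync)
		return write_inode_now(inode, 1);

	/*
	 * datasync = 0: ask for a full fsync so that non-data
	 * inode changes (timestamps, link count, ...) are caught
	 * as well, which is what NFS needs.
	 */
	return vfs_fsync(file, file->f_path.dentry, 0);
}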
> internal log, no wsync, no fdatasync
> 2m48.632s 2m59.676s 2m42.450s
>
> internal log, wsync, no fdatasync
> 3m1.320s 3m10.961s 2m53.560s
>
> internal log, wsync, fdatasync
> 1m40.191s 1m38.780s 1m35.758s
The wsync case still includes either the ->fsync or write_inode call,
right? If we use wsync we shouldn't need either in theory, as the
transactions already commit synchronously.
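In principle the check is trivial; something like this sketch (the
helper name is invented, XFS_MOUNT_WSYNC is the real XFS mount flag)
is all the information the fsync path would need:

#include "xfs.h"
#include "xfs_mount.h"
#include "xfs_inode.h"

/*
 * Sketch only: on a wsync mount every transaction commits
 * synchronously, so the inode's metadata is already stable by the
 * time an ->fsync or ->write_inode call comes in, and the extra
 * flush could be skipped.
 */
static int example_needs_metadata_flush(struct xfs_inode *ip)
{
	return !(ip->i_mount->m_flags & XFS_MOUNT_WSYNC);
}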
Anyway, given the massive improvement of ->fsync over write_inode you
really should post that patch to the NFS list for discussion ASAP.
> > But all this affects metadata performance, and only for sync exports,
> > while the OP does a simple dd, which is streaming data I/O and uses the
> > (extremely unsafe) async export option that disables the write_inode
> > calls.
>
> Right. This might not apply to Emmanuel's problem. I've been wondering
> whether a recent change to stop holding the inode mutex over the sync
> helps in the streaming I/O case. Any idea?
It should help a bit, though I'm not sure it can make that much of a
difference for such a simple single-threaded workload. Emmanuel, is
there any chance you could try the latest 2.6.32-stable kernel or even
2.6.33-rc, as those changes are included there?
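For reference, after those changes the generic write path looks
roughly like this (quoting from memory, modulo details): i_mutex is
dropped before the O_SYNC writeback, so the sync no longer serializes
later writes to the same file:

ssize_t generic_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
		unsigned long nr_segs, loff_t pos)
{
	struct file *file = iocb->ki_filp;
	struct inode *inode = file->f_mapping->host;
	ssize_t ret;

	mutex_lock(&inode->i_mutex);
	ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos);
	mutex_unlock(&inode->i_mutex);	/* dropped _before_ the sync */

	if (ret > 0 || ret == -EIOCBQUEUED) {
		ssize_t err;

		/* O_SYNC/IS_SYNC writeback now runs without i_mutex */
		err = generic_write_sync(file, pos, ret);
		if (err < 0 && ret > 0)
			ret = err;
	}
	return ret;
}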