[XFS updates] XFS development tree branch, master, updated. v3.1-rc1-78-gab03e6a
xfs at oss.sgi.com
Wed Oct 5 07:06:19 CDT 2011
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "XFS development tree".
The branch, master has been updated
ab03e6a xfs: fix buffer flushing during unmount
6f76e76 xfs: optimize fsync on directories
edc3615 xfs: reduce the number of log forces from tail pushing
fcf219b xfs: Don't allocate new buffers on every call to _xfs_buf_find
86671da xfs: simplify xfs_trans_ijoin* again
from 91409f1253ecdc9368bddd6674a71141bbb188d8 (commit)
Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.
- Log -----------------------------------------------------------------
commit ab03e6ad834d81f95f24f66231bfab6b9a8ef82c
Author: Christoph Hellwig <hch at infradead.org>
Date: Wed Sep 14 14:08:26 2011 +0000
xfs: fix buffer flushing during unmount
The code to flush buffers in the umount code is a bit iffy: we first
flush all delwri buffers out, but then might be able to queue up a
new one when logging the sb counts. On a normal shutdown that one
would get flushed out when doing the synchronous superblock write in
xfs_unmountfs_writesb, but we skip that one if the filesystem has
been shut down.
Fix this by moving the delwri list flushing to just before unmounting
the log, and while we're at it also remove the superfluous delwri list
and buffer lru flushing for the rt and log devices, which can never have
cached or delwri buffers.
Signed-off-by: Christoph Hellwig <hch at lst.de>
Reported-by: Amit Sahrawat <amit.sahrawat83 at gmail.com>
Tested-by: Amit Sahrawat <amit.sahrawat83 at gmail.com>
Signed-off-by: Alex Elder <aelder at sgi.com>
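For illustration only, a rough sketch of the reordering described above;
every name below is a simplified stand-in, not the actual XFS unmount code.

	struct mount_sketch {
		void	*data_dev;	/* stand-in for the data device buftarg */
	};

	/* stand-ins for the real helpers; not the XFS function names */
	static void log_sb_unit_counters(struct mount_sketch *mp) { (void)mp; }
	static void flush_delwri_buffers(void *dev) { (void)dev; }
	static void unmount_log(struct mount_sketch *mp) { (void)mp; }

	static void unmountfs_sketch(struct mount_sketch *mp)
	{
		/*
		 * The final transactional update of the superblock counters
		 * may queue a new delayed-write buffer, so it has to happen
		 * before the delwri flush rather than after it.
		 */
		log_sb_unit_counters(mp);

		/*
		 * Flush the delwri list only now, just before the log goes
		 * away.  The rt and log devices never carry cached or delwri
		 * buffers, so only the data device needs flushing.
		 */
		flush_delwri_buffers(mp->data_dev);

		unmount_log(mp);
	}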
commit 6f76e76852b85216d518d6163ff1e84bd73a624d
Author: Christoph Hellwig <hch at infradead.org>
Date: Sun Oct 2 14:25:16 2011 +0000
xfs: optimize fsync on directories
Directories are only updated transactionally, which means fsync only
needs to flush the log if the inode is currently dirty, but does not
need to bother with checking for dirty data or non-transactional
updates, and most importantly does not have to flush disk caches
except as part of a transaction commit.
While the first two optimizations can't easily be measured, the
latter actually makes a difference when doing lots of fsyncs that do
not actually have to commit the inode, e.g. because an earlier fsync
already pushed the log far enough.
The new xfs_dir_fsync is identical to xfs_nfs_commit_metadata except
for the prototype, but I'm not sure creating a common helper for the
two is worth it given how simple the functions are.
Signed-off-by: Christoph Hellwig <hch at lst.de>
Signed-off-by: Alex Elder <aelder at sgi.com>
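As a rough illustration of the idea (not the actual patch), a directory
fsync of this kind only needs to force the log up to the inode's
last-logged LSN when the inode is still pinned; the types and helpers
below are simplified stand-ins.

	#include <stdint.h>

	struct inode_sketch {
		int		pin_count;	/* non-zero while the last change lives only in the log */
		uint64_t	last_lsn;	/* LSN of the last transaction that dirtied the inode */
	};

	/* stand-in for a synchronous log force up to the given LSN */
	static int log_force_lsn(uint64_t lsn) { (void)lsn; return 0; }

	static int dir_fsync_sketch(struct inode_sketch *ip)
	{
		uint64_t lsn = 0;

		/*
		 * Directory updates are purely transactional: there is no
		 * dirty page cache data and no non-transactional metadata
		 * to consider.
		 */
		if (ip->pin_count)
			lsn = ip->last_lsn;

		if (!lsn)
			return 0;	/* log already covers this inode, nothing to do */

		/* The disk cache is only flushed as part of the log force itself. */
		return log_force_lsn(lsn);
	}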
commit edc3615f7fd97dc78ea2cd872f55c4b382c46bb5
Author: Dave Chinner <dchinner at redhat.com>
Date: Fri Sep 30 04:45:03 2011 +0000
xfs: reduce the number of log forces from tail pushing
The AIL push code will issue a log force on every single push loop
that it exits after encountering pinned items. It doesn't rescan
these pinned items until it revisits the AIL from the start. Hence
we only need to force the log once per walk from the start of the
AIL to the target LSN.
This results in numbers like this:
xs_push_ail_flush..... 1456
xs_log_force......... 1485
For an 8-way 50M inode create workload, almost all the log forces
are coming from the AIL pushing code.
Reduce the number of log forces by only forcing the log if the
previous walk found pinned buffers. This reduces the numbers to:
xs_push_ail_flush..... 665
xs_log_force......... 682
For the same test.
Signed-off-by: Dave Chinner <dchinner at redhat.com>
Reviewed-by: Christoph Hellwig <hch at lst.de>
Signed-off-by: Alex Elder <aelder at sgi.com>
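A minimal sketch of the deferral described above (simplified stand-ins,
not the xfsaild code): remember whether pinned items were seen during a
walk, and only issue the log force when the next walk restarts from the
head of the AIL.

	#include <stdbool.h>

	struct ail_sketch {
		bool	log_flush_pending;	/* pinned items seen since the last force */
	};

	/* stand-in for a log force covering all currently pinned items */
	static void log_force(void) { }

	static void ail_push_pass_sketch(struct ail_sketch *ail, bool found_pinned,
					 bool restarting_from_head)
	{
		if (found_pinned)
			ail->log_flush_pending = true;

		/*
		 * Old behaviour: force the log on every push loop exit that
		 * saw pinned items.  New behaviour: defer the force until the
		 * walk restarts from the head of the AIL, so a single force
		 * covers the whole walk up to the target LSN.
		 */
		if (restarting_from_head && ail->log_flush_pending) {
			log_force();
			ail->log_flush_pending = false;
		}
	}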
commit fcf219b77f2cb05bc22fc3d6cf490629e40ccc39
Author: Dave Chinner <dchinner at redhat.com>
Date: Fri Sep 30 04:45:02 2011 +0000
xfs: Don't allocate new buffers on every call to _xfs_buf_find
Stats show that for an 8-way unlink @ ~80,000 unlinks/s we are doing
~1 million cache hit lookups to ~3000 buffer creates. That's almost
3 orders of magnitude more cache hits than misses, so optimising for
cache hits is quite important. In the cache hit case, we do not need
to allocate a new buffer in case of a cache miss, so we are
effectively hitting the allocator for no good reason for the vast
majority of calls to _xfs_buf_find. 8-way create workloads are
showing similar cache hit/miss ratios.
The result is profiles that look like this:
 samples   pcnt  function             DSO
 _______  _____  ___________________  _________________
 1036.00  10.0%  _xfs_buf_find        [kernel.kallsyms]
  582.00   5.6%  kmem_cache_alloc     [kernel.kallsyms]
  519.00   5.0%  __memcpy             [kernel.kallsyms]
  468.00   4.5%  __ticket_spin_lock   [kernel.kallsyms]
  388.00   3.7%  kmem_cache_free      [kernel.kallsyms]
  331.00   3.2%  xfs_log_commit_cil   [kernel.kallsyms]
Further, there is a fair bit of work involved in initialising a new
buffer once a cache miss has occurred and we currently do that under
the rbtree spinlock. That increases spinlock hold time on what are
heavily used trees.
To fix this, remove the initialisation of the buffer from
_xfs_buf_find() and only allocate the new buffer once we've had a
cache miss. Initialise the buffer immediately after allocating it in
xfs_buf_get, too, so that it is ready for insert if we get another
cache miss after allocation. This minimises lock hold time and
avoids unnecessary allocator churn. The resulting profiles look
like:
 samples   pcnt  function             DSO
 _______  _____  ___________________  _________________
 8111.00   9.1%  _xfs_buf_find        [kernel.kallsyms]
 4380.00   4.9%  __memcpy             [kernel.kallsyms]
 4341.00   4.8%  __ticket_spin_lock   [kernel.kallsyms]
 3401.00   3.8%  kmem_cache_alloc     [kernel.kallsyms]
 2856.00   3.2%  xfs_log_commit_cil   [kernel.kallsyms]
 2625.00   2.9%  __kmalloc            [kernel.kallsyms]
 2380.00   2.7%  kfree                [kernel.kallsyms]
 2016.00   2.3%  kmem_cache_free      [kernel.kallsyms]
These show a significant reduction in the time spent allocating and
freeing from slabs (kmem_cache_alloc and kmem_cache_free).
Signed-off-by: Dave Chinner <dchinner at redhat.com>
Reviewed-by: Christoph Hellwig <hch at lst.de>
Signed-off-by: Alex Elder <aelder at sgi.com>
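A rough sketch of the lookup-then-allocate pattern described above; the
names and the toy one-entry cache below are stand-ins for the
rbtree-based XFS buffer cache, not the real code.

	#include <stdlib.h>

	struct buf_sketch {
		long	blkno;
		/* ... buffer state ... */
	};

	/*
	 * Stand-in for the cache lookup.  If new_bp is NULL this is a pure
	 * lookup; otherwise new_bp is inserted on a miss.  A real
	 * implementation would hold the tree lock around the search/insert;
	 * this toy one-entry cache simply leaks anything it evicts.
	 */
	static struct buf_sketch *cache_find(long blkno, struct buf_sketch *new_bp)
	{
		static struct buf_sketch *slot;

		if (slot && slot->blkno == blkno)
			return slot;		/* cache hit */
		if (new_bp) {
			slot = new_bp;		/* miss: insert the preallocated buffer */
			return slot;
		}
		return NULL;			/* miss on the lookup-only pass */
	}

	static struct buf_sketch *buf_get_sketch(long blkno)
	{
		struct buf_sketch *bp, *new_bp;

		/* Fast path: lookup only, no allocation, minimal lock hold time. */
		bp = cache_find(blkno, NULL);
		if (bp)
			return bp;

		/* Cache miss: allocate and initialise outside the tree lock. */
		new_bp = calloc(1, sizeof(*new_bp));
		if (!new_bp)
			return NULL;
		new_bp->blkno = blkno;

		/* Retry; another thread may have inserted the buffer meanwhile. */
		bp = cache_find(blkno, new_bp);
		if (bp != new_bp)
			free(new_bp);	/* lost the race, use the existing buffer */
		return bp;
	}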
commit 86671dafd1b90d73c9f8453ea8ec35fbfce0418b
Author: Christoph Hellwig <hch at infradead.org>
Date: Mon Sep 19 15:00:54 2011 +0000
xfs: simplify xfs_trans_ijoin* again
There is no reason to keep a reference to the inode even if we unlock
it during transaction commit because we never drop a reference between
the ijoin and commit. Also use this fact to merge xfs_trans_ijoin_ref
back into xfs_trans_ijoin: the third argument now decides whether an
unlock is needed.
I'm actually starting to wonder if allowing inodes to be unlocked
at transaction commit really is worth the effort. The only real
benefit is that they can be unlocked earlier when committing a
synchronous transaction, but that could be solved by doing the
log force manually after the unlock, too.
Signed-off-by: Christoph Hellwig <hch at lst.de>
Signed-off-by: Alex Elder <aelder at sgi.com>
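As one hedged reading of that change (check the patch for the exact
semantics), call sites end up looking roughly like the fragment below,
where the lock flags passed as the third argument name the locks the
transaction should release at commit time. This is only a call-site
fragment and assumes the usual XFS headers plus an existing transaction
tp and inode ip.

	/* join the inode and keep it locked after the transaction commits */
	xfs_trans_ijoin(tp, ip, 0);

	/* join the inode and have the commit drop the ILOCK for us */
	xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);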
-----------------------------------------------------------------------
Summary of changes:
fs/xfs/xfs_attr.c | 28 +++++++++++++-------------
fs/xfs/xfs_bmap.c | 4 +-
fs/xfs/xfs_buf.c | 48 ++++++++++++++++++++++++++-------------------
fs/xfs/xfs_buf.h | 1 -
fs/xfs/xfs_dfrag.c | 4 +-
fs/xfs/xfs_dquot.c | 2 +-
fs/xfs/xfs_file.c | 33 +++++++++++++++++++++++++++++-
fs/xfs/xfs_inode.c | 6 ++--
fs/xfs/xfs_inode_item.c | 4 +--
fs/xfs/xfs_ioctl.c | 2 +-
fs/xfs/xfs_iomap.c | 6 ++--
fs/xfs/xfs_iops.c | 4 +-
fs/xfs/xfs_mount.c | 29 +++++++++------------------
fs/xfs/xfs_qm_syscalls.c | 2 +-
fs/xfs/xfs_rename.c | 8 +++---
fs/xfs/xfs_rtalloc.c | 10 ++++----
fs/xfs/xfs_super.c | 2 +-
fs/xfs/xfs_trace.h | 1 +
fs/xfs/xfs_trans.c | 2 +-
fs/xfs/xfs_trans.h | 3 +-
fs/xfs/xfs_trans_ail.c | 33 +++++++++++++++++++------------
fs/xfs/xfs_trans_inode.c | 25 ++++-------------------
fs/xfs/xfs_trans_priv.h | 1 +
fs/xfs/xfs_vnodeops.c | 34 ++++++++++++++++----------------
24 files changed, 155 insertions(+), 137 deletions(-)
hooks/post-receive
--
XFS development tree