
To: xfs@xxxxxxxxxxx
Subject: [XFS updates] XFS development tree branch, for-linus, updated. v2.6.36-rc8-95-g39dc948
From: xfs@xxxxxxxxxxx
Date: Thu, 21 Oct 2010 12:15:31 -0500
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "XFS development tree".

The branch, for-linus, has been updated
  a731cd1 xfs: semaphore cleanup
  6743099 xfs: Extend project quotas to support 32bit project ids
  1a1a3e9 xfs: remove xfs_buf wrappers
  6c77b0e xfs: remove xfs_cred.h
  78a4b09 xfs: remove xfs_globals.h
  668332e xfs: remove xfs_version.h
  1ae4fe6 xfs: remove xfs_refcache.h
  4957a44 xfs: fix the xfs_trans_committed
  dfe188d xfs: remove unused t_callback field in struct xfs_trans
  d276734 xfs: fix bogus m_maxagi check in xfs_iget
  1b04071 xfs: do not use xfs_mod_incore_sb_batch for per-cpu counters
  96540c7 xfs: do not use xfs_mod_incore_sb for per-cpu counters
  61ba35d xfs: remove XFS_MOUNT_NO_PERCPU_SB
  50f59e8 xfs: pack xfs_buf structure more tightly
  74f75a0 xfs: convert buffer cache hash to rbtree
  69b491c xfs: serialise inode reclaim within an AG
  e3a20c0 xfs: batch inode reclaim lookup
  78ae525 xfs: implement batched inode lookups for AG walking
  e13de95 xfs: split out inode walk inode grabbing
  65d0f20 xfs: split inode AG walking into separate code for reclaim
  69d6cc7 xfs: remove buftarg hash for external devices
  1922c94 xfs: use unhashed buffers for size checks
  26af655 xfs: kill XBF_FS_MANAGED buffers
  ebad861 xfs: store xfs_mount in the buftarg instead of in the xfs_buf
  5adc94c xfs: introduce uncached buffer read primitive
  686865f xfs: rename xfs_buf_get_nodaddr to be more appropriate
  dcd79a1 xfs: don't use vfs writeback for pure metadata modifications
  e176579 xfs: lockless per-ag lookups
  bd32d25 xfs: remove debug assert for per-ag reference counting
  d1583a3 xfs: reduce the number of CIL lock round trips during commit
  9c16991 xfs: eliminate some newly-reported gcc warnings
  c0e59e1 xfs: remove the ->kill_root btree operation
  acecf1b xfs: stop using xfs_qm_dqtobp in xfs_qm_dqflush
  52fda11 xfs: simplify xfs_qm_dqusage_adjust
  4472235 xfs: Introduce XFS_IOC_ZERO_RANGE
  3ae4c9d xfs: use range primitives for xfs page cache operations
      from  081003fff467ea0e727f66d5d435b4f473a789b3 (commit)

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.

- Log -----------------------------------------------------------------
commit a731cd116c9334e01bcf3e676c0c621fe7de6ce4
Author: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Date:   Tue Sep 7 14:33:15 2010 +0000

    xfs: semaphore cleanup
    
    Get rid of init_MUTEX[_LOCKED]() and use sema_init() instead.
    
    (Ported to current XFS code by <aelder@xxxxxxx>.)
    
    Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 6743099ce57a40509a86849a22317ed4b7516911
Author: Arkadiusz Miśkiewicz <arekm@xxxxxxxx>
Date:   Sun Sep 26 06:10:18 2010 +0000

    xfs: Extend project quotas to support 32bit project ids
    
    This patch adds support for 32bit project quota identifiers.
    
    The on-disk format is backward compatible with 16bit projid numbers.
    The projid is now kept on disk in two 16bit values - di_projid_lo
    (which holds the same position as the old 16bit projid value) and the
    new di_projid_hi (which takes existing padding) - and is converted
    from/to a 32bit value on the fly.
    
    xfs_admin (for an existing fs) or mkfs.xfs (for a new fs) needs to be
    used to enable PROJID32BIT support.
    
    Signed-off-by: Arkadiusz Miśkiewicz <arekm@xxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 1a1a3e97bad42e92cd2f32e81c396c8ee0bddb28
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date:   Wed Oct 6 18:41:18 2010 +0000

    xfs: remove xfs_buf wrappers
    
    Stop having two different names for many buffer functions and use
    the more descriptive xfs_buf_* names directly.
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 6c77b0ea1bdf85dfd48c20ceb10fd215a95c66e2
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date:   Wed Oct 6 18:41:17 2010 +0000

    xfs: remove xfs_cred.h
    
    We haven't actually been passing credentials around inside XFS for a
    while now, so remove xfs_cred.h with its cred_t typedef and all
    instances of it.
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 78a4b0961ff241c6f23b16329db0d67e97cb86a7
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date:   Wed Oct 6 18:41:16 2010 +0000

    xfs: remove xfs_globals.h
    
    This header only provides one extern that isn't actually defined
    anywhere and is shadowed by a macro.
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 668332e5fec809bb100da619fda80e033b12b4a7
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date:   Wed Oct 6 18:41:15 2010 +0000

    xfs: remove xfs_version.h
    
    It used to have a place when it contained an automatically generated
    CVS version, but these days it's entirely superfluous.
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 1ae4fe6dba24ebabcd12cd0fa45cc5955394cbd8
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date:   Wed Oct 6 18:41:14 2010 +0000

    xfs: remove xfs_refcache.h
    
    This header has been completely unused for a couple of years.
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 4957a449a1bce2f5095f57f84114dc038a8f08d5
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date:   Wed Oct 6 18:41:13 2010 +0000

    xfs: fix the xfs_trans_committed
    
    Use the correct prototype for xfs_trans_committed instead of casting it.
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit dfe188d4283752086d48380cde40d9801c318667
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date:   Wed Oct 6 18:41:12 2010 +0000

    xfs: remove unused t_callback field in struct xfs_trans
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit d276734d937a649ff43fd197d0df7a747bd55b7e
Author: Christoph Hellwig <hch@xxxxxx>
Date:   Wed Oct 6 18:31:23 2010 +0000

    xfs: fix bogus m_maxagi check in xfs_iget
    
    These days inode64 should only control which AGs we allocate new
    inodes from, while we still try to support reading all existing
    inodes.  To make this actually work, the check on top of xfs_iget
    needs to be relaxed to allow inodes in all allocation groups instead
    of just those that we allow allocating inodes from.  Note that we
    can't simply remove the check - it prevents us from accessing
    invalid data when fed invalid inode numbers from NFS or bulkstat.
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 1b0407125f9a5be63e861eb27c8af9e32f20619c
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date:   Thu Sep 30 02:25:56 2010 +0000

    xfs: do not use xfs_mod_incore_sb_batch for per-cpu counters
    
    Update the per-cpu counters manually in xfs_trans_unreserve_and_mod_sb
    and remove support for per-cpu counters from xfs_mod_incore_sb_batch
    to simplify it.  An added benefit is that we don't have to take
    m_sb_lock for transactions that only modify per-cpu counters.
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 96540c78583a417113df4d027e6b68a595ab9a09
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date:   Thu Sep 30 02:25:55 2010 +0000

    xfs: do not use xfs_mod_incore_sb for per-cpu counters
    
    Export xfs_icsb_modify_counters and always use it for modifying
    the per-cpu counters.  Remove support for per-cpu counters from
    xfs_mod_incore_sb to simplify it.
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 61ba35dea0593fbc8d062cab3e4c4c3da5ce7104
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date:   Thu Sep 30 02:25:54 2010 +0000

    xfs: remove XFS_MOUNT_NO_PERCPU_SB
    
    Fail the mount if we can't allocate memory for the per-CPU counters.
    This is consistent with how we handle everything else in the mount
    path and makes the superblock counter modification a lot simpler.
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 50f59e8eed85ec4c79bc2454ed50c7886f6c5ebf
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Fri Sep 24 19:59:15 2010 +1000

    xfs: pack xfs_buf structure more tightly
    
    pahole reports the struct xfs_buf has quite a few holes in it, so
    packing the structure better will reduce the size of it by 16 bytes.
    Also, move all the fields used in cache lookups into the first
    cacheline.
    
    Before on x86_64:
    
        /* size: 320, cachelines: 5 */
        /* sum members: 298, holes: 6, sum holes: 22 */
    
    After on x86_64:
    
        /* size: 304, cachelines: 5 */
        /* padding: 6 */
        /* last cacheline: 48 bytes */
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit 74f75a0cb7033918eb0fa4a50df25091ac75c16e
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Fri Sep 24 19:59:04 2010 +1000

    xfs: convert buffer cache hash to rbtree
    
    The buffer cache hash is showing typical hash scalability problems.
    In large scale testing the number of cached items grows far larger
    than the hash can efficiently handle. Hence we need to move to a
    self-scaling cache indexing mechanism.
    
    I have selected rbtrees for indexing because they have O(log n)
    search scalability, and insert and remove cost is not excessive,
    even on large trees. Hence we should be able to cache large numbers
    of buffers without incurring the excessive cache miss search
    penalties that the hash is imposing on us.
    
    To ensure we still have parallel access to the cache, we need
    multiple trees. Rather than hashing the buffers by disk address to
    select a tree, it seems more sensible to separate trees by typical
    access patterns. Most operations use buffers from within a single AG
    at a time, so rather than searching lots of different lists,
    separate the buffer indexes out into per-AG rbtrees. This means that
    searches during metadata operation have a much higher chance of
    hitting cache resident nodes, and that updates of the tree are less
    likely to disturb trees being accessed on other CPUs doing
    independent operations.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit 69b491c214d7fd4d4df972ae5377be99ca3753db
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Mon Sep 27 11:09:51 2010 +1000

    xfs: serialise inode reclaim within an AG
    
    Memory reclaim via shrinkers has a terrible habit of having N+M
    concurrent shrinker executions (N = num CPUs, M = num kswapds) all
    trying to shrink the same cache. When the cache they are all working
    on is protected by a single spinlock, massive contention and
    slowdowns occur.
    
    Wrap the per-ag inode caches with a reclaim mutex to serialise
    reclaim access to the AG. This will block concurrent reclaim in each
    AG but still allow reclaim to scan multiple AGs concurrently. Allow
    shrinkers to move on to the next AG if they can't get the lock, and if
    we can't get any AG, then start blocking on locks.
    
    To prevent reclaimers from continually scanning the same inodes in
    each AG, add a cursor that tracks where the last reclaim got up to
    and start from that point on the next reclaim. This should avoid
    only ever scanning a small number of inodes at the start of each AG
    and not making progress. If we have a non-shrinker based reclaim
    pass, ignore the cursor and reset it to zero once we are done.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit e3a20c0b02e1704ab115dfa9d012caf0fbc45ed0
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Fri Sep 24 19:51:50 2010 +1000

    xfs: batch inode reclaim lookup
    
    Batch and optimise the per-ag inode lookup for reclaim to minimise
    scanning overhead. This involves gang lookups on the radix trees to
    get multiple inodes during each tree walk, and tighter validation of
    what inodes can be reclaimed without blocking before we take any
    locks.
    
    This is based on ideas suggested in a proof-of-concept patch
    posted by Nick Piggin.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit 78ae5256768b91f25ce7a4eb9f56d563e302cc10
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Tue Sep 28 12:28:19 2010 +1000

    xfs: implement batched inode lookups for AG walking
    
    With the reclaim code separated from the generic walking code, it is
    simple to implement batched lookups for the generic walk code.
    Separate out the inode validation from the execute operations and
    modify the tree lookups to get a batch of inodes at a time.
    
    Reclaim operations will be optimised separately.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit e13de955ca67b0bd1cec9a2f9352a3053065bf7f
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Tue Sep 28 12:28:06 2010 +1000

    xfs: split out inode walk inode grabbing
    
    When doing read side inode cache walks, the code to validate and
    grab an inode is common to all callers. Split it out of the execute
    callbacks in preparation for batching lookups. Similarly, split out
    the inode reference dropping from the execute callbacks into the
    main lookup loop to be symmetric with the grab.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit 65d0f20533c503b50bd5e7e86434512af7761eea
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Fri Sep 24 18:40:15 2010 +1000

    xfs: split inode AG walking into separate code for reclaim
    
    The reclaim walk requires different locking and has a slightly
    different walk algorithm, so separate it out so that it can be
    optimised separately.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit 69d6cc76cff3573ceefda178b75e20878866fdc3
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Wed Sep 22 10:47:20 2010 +1000

    xfs: remove buftarg hash for external devices
    
    For RT and external log devices, we never use hashed buffers
    anymore.  Remove the buftarg hash tables that are set up for them.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit 1922c949c59f93beb560d59874bcc6d5c00115ac
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Wed Sep 22 10:47:20 2010 +1000

    xfs: use unhashed buffers for size checks
    
    When we are checking that we can access the last block of each device, we
    do not need to use cached buffers as they will be tossed away
    immediately. Use uncached buffers for size checks so that all IO
    prior to full in-memory structure initialisation does not use the
    buffer cache.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit 26af655233dd486659235f3049959d2f7dafc5a1
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Wed Sep 22 10:47:20 2010 +1000

    xfs: kill XBF_FS_MANAGED buffers
    
    Filesystem level managed buffers are buffers that have their
    lifecycle controlled by the filesystem layer, not the buffer cache.
    We currently cache these buffers, which makes cleanup and cache
    walking somewhat troublesome. Convert the fs managed buffers to
    uncached buffers obtained via xfs_buf_get_uncached(), and remove
    the XBF_FS_MANAGED special cases from the buffer cache.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit ebad861b5702c3e2332a3e906978f47144d22f70
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Wed Sep 22 10:47:20 2010 +1000

    xfs: store xfs_mount in the buftarg instead of in the xfs_buf
    
    Each buffer contains both a buftarg pointer and a mount pointer. If
    we add a mount pointer into the buftarg, we can avoid needing the
    b_mount field in every buffer and grab it from the buftarg when
    needed instead. This shrinks the xfs_buf by 8 bytes.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit 5adc94c247c3779782c7b0b8b5e28cf50596eb37
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Fri Sep 24 21:58:31 2010 +1000

    xfs: introduce uncached buffer read primitive
    
    To avoid the need to use cached buffers for single-shot or buffers
    cached at the filesystem level, introduce a new buffer read
    primitive that bypasses the cache and reads directly from disk.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit 686865f76e35b28ba7aa6afa19209426f0da6201
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Fri Sep 24 20:07:47 2010 +1000

    xfs: rename xfs_buf_get_nodaddr to be more appropriate
    
    xfs_buf_get_nodaddr() is really used to allocate a buffer that is
    uncached. While it is not directly assigned a disk address, the fact
    that it is not cached is the more important distinction. With the
    upcoming uncached buffer read primitive, we should be consistent
    with this distinction.
    
    While there, make page allocation in xfs_buf_get_nodaddr() safe
    against memory reclaim re-entrancy into the filesystem by allowing
    a flags parameter to be passed.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit dcd79a1423f64ee0184629874805c3ac40f3a2c5
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Tue Sep 28 12:27:25 2010 +1000

    xfs: don't use vfs writeback for pure metadata modifications
    
    Under heavy multi-way parallel create workloads, the VFS struggles
    to write back all the inodes that have been changed in age order.
    The bdi flusher thread becomes CPU bound, spending 85% of its time
    in the VFS code, mostly traversing the superblock dirty inode list
    to separate dirty inodes old enough to flush.
    
    We already keep an index of all metadata changes in age order - in
    the AIL - and continued log pressure will do age ordered writeback
    without any extra overhead at all. If there is no pressure on the
    log, the xfssyncd will periodically write back metadata in ascending
    disk address offset order so will be very efficient.
    
    Hence we can stop marking VFS inodes dirty during transaction commit
    or when changing timestamps during transactions. This will limit the
    inodes in the superblock dirty list to those containing data or
    unlogged metadata changes.
    
    However, the timestamp changes are slightly more complex than this -
    there are a couple of places that do unlogged updates of the
    timestamps, and the VFS needs to be informed of these. Hence add a
    new function xfs_trans_ichgtime() for transactional changes,
    and leave xfs_ichgtime() for the non-transactional changes.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>

commit e176579e70118ed7cfdb60f963628fe0ca771f3d
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Wed Sep 22 10:47:20 2010 +1000

    xfs: lockless per-ag lookups
    
    When we start taking a reference to the per-ag for every cached
    buffer in the system, kernel lockstat profiling on an 8-way create
    workload shows the mp->m_perag_lock has higher acquisition rates
    than the inode lock and has significantly more contention. That is,
    it becomes the highest contended lock in the system.
    
    The perag lookup is trivial to convert to lock-less RCU lookups
    because perag structures never go away. Hence the only thing we need
    to protect against is tree structure changes during a grow. This can
    be done simply by replacing the locking in xfs_perag_get() with RCU
    read locking. This removes the mp->m_perag_lock completely from this
    path.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit bd32d25a7cf7242512e77e70bab63df4402ab91c
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Wed Sep 22 10:47:20 2010 +1000

    xfs: remove debug assert for per-ag reference counting
    
    When we start taking references per cached buffer to the perag
    it is cached on, it will blow the current debug maximum reference
    count assert out of the water. The assert has never caught a bug,
    and we have tracing to track changes if there ever is a problem,
    so just remove it.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit d1583a3833290ab9f8b13a064acbb5e508c59f60
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Fri Sep 24 18:14:13 2010 +1000

    xfs: reduce the number of CIL lock round trips during commit
    
    When committing a transaction, we do a CIL state lock round trip
    on every single log vector we insert into the CIL. This is resulting
    in the lock being as hot as the inode and dcache locks on 8-way
    create workloads. Rework the insertion loops to bring the number
    of lock round trips down to one per transaction for log vectors,
    and one more for the busy extents.
    
    Also change the allocation of the log vector buffer not to zero it
    as we copy over the entire allocated buffer anyway.
    
    This patch also includes a structural cleanup to the CIL item
    insertion provided by Christoph Hellwig.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Alex Elder <aelder@xxxxxxx>

commit 9c169915ad374cd9efb1556943b2074ec07e1749
Author: Poyo VL <poyo_vl@xxxxxxxxx>
Date:   Thu Sep 2 07:41:55 2010 +0000

    xfs: eliminate some newly-reported gcc warnings
    
    Ionut Gabriel Popescu <poyo_vl@xxxxxxxxx> submitted a simple change
    to eliminate some "may be used uninitialized" warnings when building
    XFS.  The reported condition seems to be something that GCC
    previously did not recognize or report.  The warnings were produced by:
    
        gcc version 4.5.0 20100604
        [gcc-4_5-branch revision 160292] (SUSE Linux)
    
    Signed-off-by: Ionut Gabriel Popescu <poyo_vl@xxxxxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit c0e59e1ac0a106bbab93404024bb6e7927ad9d6d
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date:   Tue Sep 7 23:34:07 2010 +0000

    xfs: remove the ->kill_root btree operation
    
    The implementations of ->kill_root only differ by either simply
    zeroing out the now unused buffer in the btree cursor in the inode
    allocation btree, or using xfs_btree_setbuf in the allocation btree.
    
    Initially both of them used xfs_btree_setbuf, but the use in the
    ialloc btree was removed early on because it interacted badly with
    xfs_trans_binval.
    
    In addition to zeroing out the buffer in the cursor xfs_btree_setbuf
    updates the bc_ra array in the btree cursor, and calls
    xfs_trans_brelse on the buffer previously occupying the slot.
    
    The bc_ra update should be done for the alloc btree too,
    although the lack of it does not cause serious problems.  The
    xfs_trans_brelse call on the other hand is effectively a no-op in
    the end - it keeps decrementing the bli_recur refcount until it hits
    zero, and then just skips out because the buffer will always be
    dirty at this point.  So removing it for the allocation btree is
    just fine.
    
    So unify the code and move it to xfs_btree.c.  While we're at it
    also replace the call to xfs_btree_setbuf with a NULL bp argument in
    xfs_btree_del_cursor with a direct call to xfs_trans_brelse given
    that the cursor is being freed just after this and the state
    updates are superfluous.  After this xfs_btree_setbuf is only used
    with a non-NULL bp argument and can thus be simplified.
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit acecf1b5d8a846bf818bf74df454330f0b444b0a
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date:   Mon Sep 6 01:44:45 2010 +0000

    xfs: stop using xfs_qm_dqtobp in xfs_qm_dqflush
    
    In xfs_qm_dqflush we know that q_blkno must be initialized already from a
    previous xfs_qm_dqread.  So instead of calling xfs_qm_dqtobp we can
    simply read the quota buffer directly.  This also saves us from a duplicate
    xfs_qm_dqcheck call and allows xfs_qm_dqtobp to be simplified now
    that it is always called for a newly initialized inode.  In addition to
    that properly unwind all locks in xfs_qm_dqflush when xfs_qm_dqcheck
    fails.
    
    This mirrors a similar cleanup in the inode lookup done earlier.
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 52fda114249578311776b25da9f73a9c34f4fd8c
Author: Christoph Hellwig <hch@xxxxxx>
Date:   Mon Sep 6 01:44:22 2010 +0000

    xfs: simplify xfs_qm_dqusage_adjust
    
    There is no need to have the user and group/project quotas locked at
    the same time.  Get rid of xfs_qm_dqget_noattach and just do a
    xfs_qm_dqget inside xfs_qm_quotacheck_dqadjust for the quota we are
    operating on right now.  The new version of xfs_qm_quotacheck_dqadjust
    holds the inode lock over its operations, which is not a problem as it simply
    increments counters and there is no concern about log contention
    during mount time.
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 447223520520b17d3b6d0631aa4838fbaf8eddb4
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Tue Aug 24 12:02:11 2010 +1000

    xfs: Introduce XFS_IOC_ZERO_RANGE
    
    XFS_IOC_ZERO_RANGE is the equivalent of an atomic XFS_IOC_UNRESVSP/
    XFS_IOC_RESVSP call pair. It enables ranges of written data to be
    turned into zeroes without requiring IO or having to free and
    reallocate the extents in the range given as would occur if we had
    to punch and then preallocate them separately.  This enables
    applications to zero parts of files very quickly without changing
    the layout of the files in any way.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>

commit 3ae4c9deb30a8d5ee305b461625dcb298c9804a9
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Tue Aug 24 12:01:50 2010 +1000

    xfs: use range primitives for xfs page cache operations
    
    While XFS passes ranges to operate on from the core code, the
    functions being called ignore either the entire range or the end
    of the range. This is historical: when the functions were written,
    Linux didn't have the necessary range operations. Update the
    functions to use the correct operations.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>

-----------------------------------------------------------------------

Summary of changes:
 fs/xfs/linux-2.6/xfs_buf.c     |  219 ++++++++++++----------
 fs/xfs/linux-2.6/xfs_buf.h     |   70 +++----
 fs/xfs/linux-2.6/xfs_cred.h    |   28 ---
 fs/xfs/linux-2.6/xfs_fs_subr.c |   31 ++--
 fs/xfs/linux-2.6/xfs_globals.c |    1 -
 fs/xfs/linux-2.6/xfs_globals.h |   23 ---
 fs/xfs/linux-2.6/xfs_ioctl.c   |   19 +-
 fs/xfs/linux-2.6/xfs_ioctl32.c |    5 +-
 fs/xfs/linux-2.6/xfs_ioctl32.h |    6 +-
 fs/xfs/linux-2.6/xfs_iops.c    |   39 +----
 fs/xfs/linux-2.6/xfs_linux.h   |    5 +-
 fs/xfs/linux-2.6/xfs_super.c   |   24 +--
 fs/xfs/linux-2.6/xfs_super.h   |    1 +
 fs/xfs/linux-2.6/xfs_sync.c    |  413 +++++++++++++++++++++++-----------------
 fs/xfs/linux-2.6/xfs_sync.h    |    4 +-
 fs/xfs/linux-2.6/xfs_trace.h   |    4 +-
 fs/xfs/linux-2.6/xfs_version.h |   29 ---
 fs/xfs/quota/xfs_dquot.c       |  164 ++++++++---------
 fs/xfs/quota/xfs_qm.c          |  221 +++++++---------------
 fs/xfs/quota/xfs_qm_bhv.c      |    2 +-
 fs/xfs/quota/xfs_qm_syscalls.c |   16 +--
 fs/xfs/xfs_ag.h                |    9 +
 fs/xfs/xfs_alloc.c             |    4 +-
 fs/xfs/xfs_alloc_btree.c       |   33 ----
 fs/xfs/xfs_attr.c              |   37 ++---
 fs/xfs/xfs_bmap.c              |   44 +++--
 fs/xfs/xfs_bmap.h              |    9 +-
 fs/xfs/xfs_btree.c             |   56 +++++-
 fs/xfs/xfs_btree.h             |   14 +--
 fs/xfs/xfs_buf_item.c          |    7 +-
 fs/xfs/xfs_da_btree.c          |    2 +-
 fs/xfs/xfs_dinode.h            |    5 +-
 fs/xfs/xfs_dir2_leaf.c         |    2 +-
 fs/xfs/xfs_fs.h                |    7 +-
 fs/xfs/xfs_fsops.c             |   14 +-
 fs/xfs/xfs_ialloc.c            |    2 +-
 fs/xfs/xfs_ialloc_btree.c      |   33 ----
 fs/xfs/xfs_iget.c              |    4 +-
 fs/xfs/xfs_inode.c             |   17 +-
 fs/xfs/xfs_inode.h             |   30 +++-
 fs/xfs/xfs_inode_item.c        |    9 -
 fs/xfs/xfs_itable.c            |    3 +-
 fs/xfs/xfs_log.c               |    5 +-
 fs/xfs/xfs_log_cil.c           |  232 ++++++++++++----------
 fs/xfs/xfs_log_recover.c       |   25 ++--
 fs/xfs/xfs_mount.c             |  308 ++++++++++++------------------
 fs/xfs/xfs_mount.h             |    9 +-
 fs/xfs/xfs_refcache.h          |   52 -----
 fs/xfs/xfs_rename.c            |   14 +-
 fs/xfs/xfs_rtalloc.c           |   29 ++--
 fs/xfs/xfs_sb.h                |   10 +-
 fs/xfs/xfs_trans.c             |   91 ++++++----
 fs/xfs/xfs_trans.h             |    3 +-
 fs/xfs/xfs_trans_buf.c         |    2 +-
 fs/xfs/xfs_trans_inode.c       |   30 +++
 fs/xfs/xfs_types.h             |    2 -
 fs/xfs/xfs_utils.c             |    9 +-
 fs/xfs/xfs_utils.h             |    3 +-
 fs/xfs/xfs_vnodeops.c          |   65 ++++---
 fs/xfs/xfs_vnodeops.h          |    6 +-
 60 files changed, 1185 insertions(+), 1375 deletions(-)
 delete mode 100644 fs/xfs/linux-2.6/xfs_cred.h
 delete mode 100644 fs/xfs/linux-2.6/xfs_globals.h
 delete mode 100644 fs/xfs/linux-2.6/xfs_version.h
 delete mode 100644 fs/xfs/xfs_refcache.h


hooks/post-receive
-- 
XFS development tree
