To: xfs@xxxxxxxxxxx
Subject: [XFS updates] XFS development tree branch, master, updated. v3.1-rc1-22-g57b5a91
From: xfs@xxxxxxxxxxx
Date: Thu, 25 Aug 2011 20:18:25 -0500
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "XFS development tree".

The branch, master has been updated
       via  57b5a91 xfs: don't serialise adjacent concurrent direct IO appending writes
       via  37b652e xfs: don't serialise direct IO reads on page cache checks
       via  242d621 xfs: deprecate the nodelaylog mount option
      from  b6bede3b4cdfbd188557ab50fceec2e91d295edf (commit)

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.

- Log -----------------------------------------------------------------
commit 57b5a91db28542a8d8a697b9e3da2bd0e062f7d3
Author: Dave Chinner <david@xxxxxxxxxxxxx>
Date:   Thu Aug 25 07:17:02 2011 +0000

    xfs: don't serialise adjacent concurrent direct IO appending writes
    
    For append write workloads, extending the file requires a certain
    amount of exclusive locking to be done up front to ensure sanity in
    things like ensuring that we've zeroed any allocated regions
    between the old EOF and the start of the new IO.
    
    For single threads, this typically isn't a problem, and for large
    IOs we don't serialise enough for it to be a problem for two
    threads on really fast block devices. However for smaller IO and
    larger thread counts we have a problem.
    
    Take 4 concurrent sequential, single block sized and aligned IOs.
    After the first IO is submitted but before it completes, we end up
    with this state:
    
            IO 1    IO 2    IO 3    IO 4
          +-------+-------+-------+-------+
          ^       ^
          |       |
          |       |
          |       |
          |       \- ip->i_new_size
          \- ip->i_size
    
    And the IO is done without exclusive locking because offset <=
    ip->i_size. When we submit IO 2, we see offset > ip->i_size, and
    grab the IO lock exclusive, because there is a chance we need to do
    EOF zeroing. However, there is already an IO in progress that avoids
    the need for EOF zeroing because offset <= ip->i_new_size, so we
    could avoid holding the IO lock exclusive here. Hence after
    submission of the second IO, we'd end up in this state:
    
            IO 1    IO 2    IO 3    IO 4
          +-------+-------+-------+-------+
          ^               ^
          |               |
          |               |
          |               |
          |               \- ip->i_new_size
          \- ip->i_size
    
    And so you can see that for the third concurrent IO, we'd avoid
    exclusive locking for the same reason we avoided the exclusive lock
    for the second IO.
    
    Fixing this is a bit more complex than that, because we need to keep
    a write-submission-local copy of ip->i_new_size so that clearing
    the value is only done if no other thread has updated it before our
    IO completes.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 37b652ec6445be99d0193047d1eda129a1a315d3
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Thu Aug 25 07:17:01 2011 +0000

    xfs: don't serialise direct IO reads on page cache checks
    
    There is no need to grab the i_mutex or the IO lock in exclusive
    mode if we don't need to invalidate the page cache. Taking these
    locks on every direct IO effectively serialises them, as taking the
    IO lock in exclusive mode has to wait for all shared holders to
    drop the lock. That only happens when IO is complete, so in effect
    it prevents dispatch of concurrent direct IO reads to the same
    inode.
    
    Fix this by taking the IO lock shared to check the page cache state,
    and only then drop it and take the IO lock exclusively if there is
    work to be done. Hence for the normal direct IO case, no exclusive
    locking will occur.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Tested-by: Joern Engel <joern@xxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

commit 242d621964dd8641df53f7d51d4c6ead655cc5a6
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date:   Wed Aug 24 05:57:51 2011 +0000

    xfs: deprecate the nodelaylog mount option
    
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Signed-off-by: Alex Elder <aelder@xxxxxxx>

-----------------------------------------------------------------------

Summary of changes:
 fs/xfs/xfs_file.c  |   85 ++++++++++++++++++++++++++++++++++++++++-----------
 fs/xfs/xfs_super.c |    2 +
 2 files changed, 68 insertions(+), 19 deletions(-)


hooks/post-receive
-- 
XFS development tree
