
Re: 2.6.39-rc3, 2.6.39-rc4: XFS lockup - regression since 2.6.38

To: linux-kernel@xxxxxxxxxxxxxxx, Markus Trippelsdorf <markus@xxxxxxxxxxxxxxx>, Bruno Prémont <bonbons@xxxxxxxxxxxxxxxxx>, xfs-masters@xxxxxxxxxxx, xfs@xxxxxxxxxxx, Christoph Hellwig <hch@xxxxxxxxxxxxx>, Alex Elder <aelder@xxxxxxx>, Dave Chinner <dchinner@xxxxxxxxxx>
Subject: Re: 2.6.39-rc3, 2.6.39-rc4: XFS lockup - regression since 2.6.38
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 5 May 2011 10:21:26 +1000
In-reply-to: <20110504005736.GA2958@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <20110423224403.5fd1136a@xxxxxxxxxxxx> <20110427050850.GG12436@dastard> <20110427182622.05a068a2@xxxxxxxxxxxx> <20110428194528.GA1627@xxxxxxxxxxxxxx> <20110429011929.GA13542@dastard> <20110504005736.GA2958@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.20 (2009-06-14)
On Wed, May 04, 2011 at 12:57:36AM +0000, Jamie Heilman wrote:
> Dave Chinner wrote:
> > OK, so the common elements here appears to be root filesystems
> > with small log sizes, which means they are tail pushing all the
> > time metadata operations are in progress. Definitely seems like a
> > race in the AIL workqueue trigger mechanism. I'll see if I can
> > reproduce this and cook up a patch to fix it.
> Is there value in continuing to post sysrq-w, sysrq-l, xfs_info, and
> other assorted feedback wrt this issue?  I've had it happen twice now
> myself in the past week or so, though I have no reliable reproduction
> technique.  Just wondering if more data points will help isolate the
> cause, and if so, how to be prepared to get them.
> For whatever it's worth, my last lockup was while running
> 2.6.39-rc5-00127-g1be6a1f with a preempt config without cgroups.

Can you all try the patch below? I've managed to trigger a couple of
xlog_wait() lockups in some controlled load tests. The lockups don't
appear to occur with the following patch, which fixes the race
condition in the AIL workqueue trigger.


Dave Chinner

xfs: fix race condition queuing AIL pushes

From: Dave Chinner <dchinner@xxxxxxxxxx>

The recent conversion of the xfsaild functionality to a work queue
introduced a hard-to-hit log space grant hang. The problem is that
the use of the XFS_AIL_PUSHING_BIT to determine whether a push is
currently in progress is racy.

When the AIL push work completes, it checks whether the target has
changed and then clears the PUSHING bit to allow a new push to be
requeued. The race condition is as follows:

        Thread 1                push work

                                check ailp->xa_target unchanged
        update ailp->xa_target
        test/set PUSHING bit
        does not queue
                                clear PUSHING bit
                                does not requeue

Now that the push target is updated, new attempts to push the AIL
will not trigger as the push target will be the same, and hence
despite trying to push the AIL we won't ever wake it again.

The fix is to ensure that the AIL push work clears the PUSHING bit
before it checks if the target is unchanged.

As a result, both push triggers operate on the same test/set bit
criteria, so even if we race in the push work and miss the target
update, the thread requesting the push will still set the PUSHING
bit and queue the push work to occur. For safety's sake, the same
queue check is done if the push work detects the target change,
though only one of the two will queue new work due to the use
of test_and_set_bit() checks.

Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
---
 fs/xfs/xfs_trans_ail.c |   16 ++++++++++------
 1 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/fs/xfs/xfs_trans_ail.c b/fs/xfs/xfs_trans_ail.c
index acdb92f..b7606d9 100644
--- a/fs/xfs/xfs_trans_ail.c
+++ b/fs/xfs/xfs_trans_ail.c
@@ -486,15 +486,19 @@ xfs_ail_worker(
                ailp->xa_last_pushed_lsn = 0;

                /*
-                * Check for an updated push target before clearing the
-                * XFS_AIL_PUSHING_BIT. If the target changed, we've got more
-                * work to do. Wait a bit longer before starting that work.
+                * We clear the XFS_AIL_PUSHING_BIT first before checking
+                * whether the target has changed. If the target has changed,
+                * this pushes the requeue race directly onto the result of the
+                * atomic test/set bit, so we are guaranteed that either the
+                * pusher that changed the target or ourselves will requeue
+                * the work (but not both).
                 */
+               clear_bit(XFS_AIL_PUSHING_BIT, &ailp->xa_flags);
+
                smp_rmb();
-               if (ailp->xa_target == target) {
-                       clear_bit(XFS_AIL_PUSHING_BIT, &ailp->xa_flags);
+               if (ailp->xa_target == target ||
+                   (test_and_set_bit(XFS_AIL_PUSHING_BIT, &ailp->xa_flags)))
                        return;
-               }

                tout = 50;
        } else if (XFS_LSN_CMP(lsn, target) >= 0) {
