
To: Dave Chinner <david@xxxxxxxxxxxxx>, Alex Lyakas <alex@xxxxxxxxxxxxxxxxx>
Subject: RE: XFS hung task in xfs_ail_push_all_sync() when unmounting FS after disk failure/recovery
From: Shyam Kaushik <shyam@xxxxxxxxxxxxxxxxx>
Date: Mon, 11 Apr 2016 20:22:14 +0530
Cc: Brian Foster <bfoster@xxxxxxxxxx>, xfs@xxxxxxxxxxx
Hi Dave,

Do you plan to post a patch for the bug you discovered in xfs_iflush(),
making it goto abort_out when xfs_imap_to_bp() fails?

We can include this patch and try to recreate the issue with it applied.
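
To make sure we are testing the same change, here is roughly what we
plan to try (a sketch only, not a tested patch; treating -EAGAIN
separately is our assumption, since a trylock failure presumably still
only needs the flush lock dropped rather than a full abort):

        /*
         * Get the buffer containing the on-disk inode. A trylock
         * failure just means we should back off and retry the flush
         * later, but any other failure must abort inode writeback so
         * the log item is removed from the AIL.
         */
        error = xfs_imap_to_bp(mp, NULL, &ip->i_imap, &dip, &bp,
                               XBF_TRYLOCK, 0);
        if (error == -EAGAIN) {
                xfs_ifunlock(ip);
                return error;
        }
        if (error || !bp)
                goto abort_out;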

Thanks.

--Shyam

-----Original Message-----
From: Dave Chinner [mailto:david@xxxxxxxxxxxxx]
Sent: 11 April 2016 06:51
To: Alex Lyakas
Cc: Shyam Kaushik; Brian Foster; xfs@xxxxxxxxxxx
Subject: Re: XFS hung task in xfs_ail_push_all_sync() when unmounting FS
after disk failure/recovery

On Sun, Apr 10, 2016 at 09:40:29PM +0300, Alex Lyakas wrote:
> Hello Dave,
>
> > On Sat, Apr 9, 2016 at 1:46 AM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > On Fri, Apr 08, 2016 at 04:21:02PM +0530, Shyam Kaushik wrote:
> >> Hi Dave, Brian, Carlos,
> >>
> >> While trying to reproduce this issue I have been running into
> >> different issues that are similar. The underlying issue remains the
> >> same: when the backend to XFS has failed and we unmount XFS, we run
> >> into the hung-task timeout (180 secs) with a stack like
> >>
> >> kernel: [14952.671131]  [<ffffffffc06a5f59>]
> >> xfs_ail_push_all_sync+0xa9/0xe0 [xfs]
> >> kernel: [14952.671139]  [<ffffffff810b26b0>] ?
> >> prepare_to_wait_event+0x110/0x110
> >> kernel: [14952.671181]  [<ffffffffc0690111>] xfs_unmountfs+0x61/0x1a0
> >> [xfs]
> >>
> >> while running trace-events, the XFS AIL push keeps looping around
> >>
> >>    xfsaild/dm-10-21143 [001] ...2 17878.555133: xfs_ilock_nowait: dev
> >> 253:10 ino 0x0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> >
> > Looks like either a stale inode (which should never reach the AIL)
> > or it's an inode that's been reclaimed and this is a use after free
> > situation. Given that we are failing IOs here, I'd suggest it's more
> > likely to be an IO failure that's caused a writeback problem, not an
> > interaction with stale inodes.
> >
> > So, look at xfs_iflush. If an IO fails, it is supposed to unlock the
> > inode by calling xfs_iflush_abort(), which will also remove it from
> > the AIL. This can also happen on reclaim of a dirty inode, and if so
> > we'll still reclaim the inode because reclaim assumes xfs_iflush()
> > cleans up properly.
> >
> > Which, apparently, it doesn't:
> >
> >         /*
> >          * Get the buffer containing the on-disk inode.
> >          */
> >         error = xfs_imap_to_bp(mp, NULL, &ip->i_imap, &dip, &bp,
> >                                XBF_TRYLOCK, 0);
> >         if (error || !bp) {
> >                 xfs_ifunlock(ip);
> >                 return error;
> >         }
> >
> > This looks like a bug - xfs_iflush hasn't aborted the inode
> > writeback on failure - it's just unlocked the flush lock. Hence it
> > has left the inode dirty in the AIL, and then the inode has probably
> > then been reclaimed, setting the inode number to zero.
> In our case, we do not reach this call, because XFS is already marked
> as "shutdown", so we take this path instead:
>     /*
>      * This may have been unpinned because the filesystem is shutting
>      * down forcibly. If that's the case we must not write this inode
>      * to disk, because the log record didn't make it to disk.
>      *
>      * We also have to remove the log item from the AIL in this case,
>      * as we wait for an empty AIL as part of the unmount process.
>      */
>     if (XFS_FORCED_SHUTDOWN(mp)) {
>         error = -EIO;
>         goto abort_out;
>     }
>
> So we call xfs_iflush_abort, but due to "iip" being NULL (as Shyam
> mentioned earlier in this thread), we proceed directly to
> xfs_ifunlock(ip), which now becomes the same situation as you
> described above.

If you are seeing this occur, something else has already gone wrong,
as you can't have a dirty inode without a log item attached to it. So
it appears to me that you are reporting a symptom of a problem after
it has occurred, not the root cause. Maybe it is the same root cause,
maybe not. Either way, it doesn't help us solve any problem.

> The comment clearly says "We also have to remove the log item from the
> AIL in this case, as we wait for an empty AIL as part of the unmount
> process." Could you perhaps point us at the code that is supposed to
> remove the log item from the AIL? Apparently this is not happening.

xfs_iflush_abort or xfs_iflush_done does that work.
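
For reference, xfs_iflush_abort() looks roughly like this (paraphrased
and trimmed from the 4.x source, so treat it as a sketch): when
ip->i_itemp is NULL, as in the case reported above, the AIL removal is
skipped entirely and only the flush lock is dropped.

    void
    xfs_iflush_abort(
            struct xfs_inode        *ip,
            bool                    stale)
    {
            struct xfs_inode_log_item *iip = ip->i_itemp;

            if (iip) {
                    /* Pull the log item out of the AIL if it is there. */
                    if (iip->ili_item.li_flags & XFS_LI_IN_AIL)
                            xfs_trans_ail_remove(&iip->ili_item,
                                            stale ? SHUTDOWN_LOG_IO_ERROR :
                                                    SHUTDOWN_CORRUPT_INCORE);
                    /* Clear logging state so no further flush is tried. */
                    iip->ili_logged = 0;
                    iip->ili_last_fields = 0;
                    iip->ili_fields = 0;
            }
            /* With iip == NULL this is all that happens: unlock, return. */
            xfs_ifunlock(ip);
    }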

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
