

To: xfs@xxxxxxxxxxx
Subject: RE: XFS hung task in xfs_ail_push_all_sync() when unmounting FS after disk failure/recovery
From: Shyam Kaushik <shyam@xxxxxxxxxxxxxxxxx>
Date: Tue, 12 Apr 2016 10:50:06 +0530
Hi Carlos,

The xfs.stap script I have been using is here
(https://www.dropbox.com/s/dluse4s3a1c7dj3/xfs.stap?dl=0). It has
line numbers matching the xfs source, with the printks I added. You will
have to tweak it a bit.

Thanks.

--Shyam

-----Original Message-----
From: Carlos Maiolino
Sent: Fri Apr 8 09:31:31 CDT 2016

> Hi Shyam,
>
> do you mind sharing your systemtap script with us?
>
> I'd like to take a look at it, and Brian will probably be interested too.
-----Original Message-----
From: Dave Chinner [mailto:david@xxxxxxxxxxxxx]
Sent: 11 April 2016 06:51
To: Alex Lyakas
Cc: Shyam Kaushik; Brian Foster; xfs@xxxxxxxxxxx
Subject: Re: XFS hung task in xfs_ail_push_all_sync() when unmounting FS
after disk failure/recovery

On Sun, Apr 10, 2016 at 09:40:29PM +0300, Alex Lyakas wrote:
> Hello Dave,
>
> On Sat, Apr 9, 2016 at 1:46 AM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > On Fri, Apr 08, 2016 at 04:21:02PM +0530, Shyam Kaushik wrote:
> >> Hi Dave, Brian, Carlos,
> >>
> >> While trying to reproduce this issue I have been running into
> >> different issues that are similar. The underlying issue remains the
> >> same: when the backend to XFS is failed & we unmount XFS, we run into
> >> a hung-task timeout (180 secs) with a stack like
> >>
> >> kernel: [14952.671131]  [<ffffffffc06a5f59>]
> >> xfs_ail_push_all_sync+0xa9/0xe0 [xfs]
> >> kernel: [14952.671139]  [<ffffffff810b26b0>] ?
> >> prepare_to_wait_event+0x110/0x110
> >> kernel: [14952.671181]  [<ffffffffc0690111>] xfs_unmountfs+0x61/0x1a0
> >> [xfs]
> >>
> >> while running trace-events, XFS ail push keeps looping around
> >>
> >>    xfsaild/dm-10-21143 [001] ...2 17878.555133: xfs_ilock_nowait: dev
> >> 253:10 ino 0x0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> >
> > Looks like either a stale inode (which should never reach the AIL)
> > or it's an inode that's been reclaimed and this is a use after free
> > situation. Given that we are failing IOs here, I'd suggest it's more
> > likely to be an IO failure that's caused a writeback problem, not an
> > interaction with stale inodes.
> >
> > So, look at xfs_iflush. If an IO fails, it is supposed to unlock the
> > inode by calling xfs_iflush_abort(), which will also remove it from
> > the AIL. This can also happen on reclaim of a dirty inode, and if so
> > we'll still reclaim the inode because reclaim assumes xfs_iflush()
> > cleans up properly.
> >
> > Which, apparently, it doesn't:
> >
> >         /*
> >          * Get the buffer containing the on-disk inode.
> >          */
> >         error = xfs_imap_to_bp(mp, NULL, &ip->i_imap, &dip, &bp, XBF_TRYLOCK, 0);
> >         if (error || !bp) {
> >                 xfs_ifunlock(ip);
> >                 return error;
> >         }
> >
> > This looks like a bug - xfs_iflush hasn't aborted the inode
> > writeback on failure - it's just unlocked the flush lock. Hence it
> > has left the inode dirty in the AIL, and the inode has probably
> > then been reclaimed, setting the inode number to zero.
> In our case, we do not reach this call, because XFS is already marked
> as "shutdown", so in our case we do:
>     /*
>      * This may have been unpinned because the filesystem is shutting
>      * down forcibly. If that's the case we must not write this inode
>      * to disk, because the log record didn't make it to disk.
>      *
>      * We also have to remove the log item from the AIL in this case,
>      * as we wait for an empty AIL as part of the unmount process.
>      */
>     if (XFS_FORCED_SHUTDOWN(mp)) {
>         error = -EIO;
>         goto abort_out;
>     }
>
> So we call xfs_iflush_abort, but due to "iip" being NULL (as Shyam
> mentioned earlier in this thread), we proceed directly to
> xfs_ifunlock(ip), which now becomes the same situation as you
> described above.

If you are seeing this occur, something else has already gone
wrong, as you can't have a dirty inode without a log item attached to
it. So it appears to me that you are reporting a symptom of a
problem after it has occurred, not the root cause. Maybe it is the
same root cause, maybe not. Either way, it doesn't help us solve any
problem.

> The comment clearly says "We also have to remove the log item from the
> AIL in this case, as we wait for an empty AIL as part of the unmount
> process." Could you perhaps point us at the code that is supposed to
> remove the log item from the AIL? Apparently this is not happening.

xfs_iflush_abort or xfs_iflush_done does that work.

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
