
Re: [PATCH] xfs: Fix WARN_ON(delalloc) in xfs_vm_releasepage()

To: Jan Kara <jack@xxxxxxx>
Subject: Re: [PATCH] xfs: Fix WARN_ON(delalloc) in xfs_vm_releasepage()
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Tue, 19 Mar 2013 14:47:59 +1100
Cc: Ben Myers <bpm@xxxxxxx>, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20130318160552.GC28508@xxxxxxxxxxxxx>
References: <1363267854-25602-1-git-send-email-jack@xxxxxxx> <20130315205214.GB22182@xxxxxxx> <20130318160552.GC28508@xxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Mon, Mar 18, 2013 at 05:05:52PM +0100, Jan Kara wrote:
> On Fri 15-03-13 15:52:14, Ben Myers wrote:
> > Hi Jan,
> > 
> > On Thu, Mar 14, 2013 at 02:30:54PM +0100, Jan Kara wrote:
> > > When a dirty page is truncated from a file but reclaim gets to it before
> > > truncate_inode_pages(), we hit WARN_ON(delalloc) in
> > > xfs_vm_releasepage(). This is because reclaim tries to write the page,
> > > xfs_vm_writepage() just bails out (leaving the page clean), and thus
> > > reclaim thinks it can continue and calls xfs_vm_releasepage() on a page
> > > with dirty buffers.
> > > 
> > > Fix the issue by redirtying the page in xfs_vm_writepage(). This stops
> > > reclaim from releasing the page, and it also keeps the page in a more
> > > consistent state: a page with dirty buffers has PageDirty set.
> > 
> > Was there an easy way to reproduce this?  I'm testing and reviewing this now
> > and it might help.
>   I used scripts/run-bash-shared-mapping.sh from the attached tarball - it
> fires up several processes beating a file with mmap accesses while
> truncating the file and memory stressing the machine. I presume fsx with
> some memhog could trigger the issue as well.

I have seen fsx trip this occasionally, but never reliably, because
other memory pressure had to be generated at the same time....

Is the above script something you could turn into an xfstest (I
haven't looked at the script yet)?


Dave Chinner
