
Re: 2.6.30 panic - xfs_fs_destroy_inode

To: Patrick Schreurs <patrick@xxxxxxxxxxxxxxxx>
Subject: Re: 2.6.30 panic - xfs_fs_destroy_inode
From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Tue, 21 Jul 2009 10:12:25 -0400
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>, linux-xfs@xxxxxxxxxxx, Tommy van Leeuwen <tommy@xxxxxxxxxxxxxxxx>, Lachlan McIlroy <lmcilroy@xxxxxxxxxx>, Eric Sandeen <sandeen@xxxxxxxxxxx>
In-reply-to: <4A4CEEF2.7040101@xxxxxxxxxxxxxxxx>
References: <4A408316.2070903@xxxxxxxxxxxxxxxx> <1587994907.388291245745033392.JavaMail.root@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20090623171305.GB23971@xxxxxxxxxxxxx> <4A4A7205.6010101@xxxxxxxxxxxxxxxx> <20090701124441.GA12844@xxxxxxxxxxxxx> <4A4CEEF2.7040101@xxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.18 (2008-05-17)
On Thu, Jul 02, 2009 at 07:31:30PM +0200, Patrick Schreurs wrote:
> Hi Christoph,
>
> With this patch we see the following:
>
> kernel BUG at fs/inode.c:1288!

Okay, I think I figured out what this is.  You hit the case where
we re-use an inode that is gone from the VFS point of view, but
still in xfs reclaimable state.  We reinitialize it using
inode_init_always, but inode_init_always does not touch i_state, which
still includes I_CLEAR.  See the patch below, which sets it to the
expected state.  What really worries me is that I don't seem to be
able to actually hit that case in testing.

Can you try the patch below on top of the previous one?


Index: linux-2.6/fs/xfs/xfs_iget.c
===================================================================
--- linux-2.6.orig/fs/xfs/xfs_iget.c    2009-07-21 16:07:41.654923213 +0200
+++ linux-2.6/fs/xfs/xfs_iget.c 2009-07-21 16:08:55.064151137 +0200
@@ -206,6 +206,7 @@ xfs_iget_cache_hit(
                        error = ENOMEM;
                        goto out_error;
                }
+               inode->i_state = I_LOCK|I_NEW;
        } else {
                /* If the VFS inode is being torn down, pause and try again. */
                if (!igrab(inode))
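
To make it easier to see what the one-liner does, here is roughly the
shape of the reclaim-reuse branch with it applied.  This is only a
sketch reconstructed from the diff context above, not a copy of the
tree -- the flag test, the inode_init_always() call convention and the
error handling are approximations:

        if (xfs_iflags_test(ip, XFS_IRECLAIMABLE)) {
                /*
                 * The VFS has already seen the last iput() on this inode,
                 * but XFS still holds it in reclaimable state.  Recycle
                 * the existing struct inode for a new VFS lifetime.
                 */
                if (inode_init_always(mp->m_super, inode)) {
                        error = ENOMEM;
                        goto out_error;
                }

                /*
                 * inode_init_always() leaves i_state alone, so the stale
                 * I_CLEAR from the inode's previous life would survive the
                 * reinitialization.  Put the inode into the same state a
                 * freshly allocated inode would start in.
                 */
                inode->i_state = I_LOCK|I_NEW;
        }
        /* (the else branch, where the VFS inode is still live, is unchanged) */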
