
[PATCH 3/5] xfs: asserting lock not held during freeing not valid

To: xfs@xxxxxxxxxxx
Subject: [PATCH 3/5] xfs: asserting lock not held during freeing not valid
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Tue, 24 Sep 2013 16:01:14 +1000
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <1380002476-18839-1-git-send-email-david@xxxxxxxxxxxxx>
References: <1380002476-18839-1-git-send-email-david@xxxxxxxxxxxxx>
From: Dave Chinner <dchinner@xxxxxxxxxx>

When we free an inode, we do so via RCU. As an RCU lookup can occur
at any time before we free an inode, and that lookup takes the inode
flags lock, we cannot safely assert that the flags lock is not held
just before marking the inode dead and running call_rcu() to free
it.
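
To make the race concrete, here is a minimal sketch of the RCU lookup
side, loosely modelled on the xfs_iget() cache-hit path; the function
name lookup_sketch() and its exact shape are made up for this
illustration, not code from the tree:

/* sketch only: assumes the usual xfs_icache.c includes and types */
static struct xfs_inode *
lookup_sketch(
	struct xfs_perag	*pag,
	xfs_ino_t		ino)
{
	struct xfs_inode	*ip;

	rcu_read_lock();
	ip = radix_tree_lookup(&pag->pag_ici_root,
			       XFS_INO_TO_AGINO(pag->pag_mount, ino));
	if (!ip) {
		rcu_read_unlock();
		return NULL;
	}

	/*
	 * A lookup can be spinning on or holding this lock while
	 * xfs_inode_free() runs on another CPU, which is why asserting
	 * the lock is unlocked just before call_rcu() is not valid.
	 */
	spin_lock(&ip->i_flags_lock);
	if (ip->i_ino != ino || __xfs_iflags_test(ip, XFS_IRECLAIM)) {
		/* inode is dead or being reclaimed - skip it */
		spin_unlock(&ip->i_flags_lock);
		rcu_read_unlock();
		return NULL;
	}
	spin_unlock(&ip->i_flags_lock);
	rcu_read_unlock();
	return ip;
}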

We check on allocation of a new inode structure that the lock is not
held, so we still have protection against locks being leaked and
hence left incorrectly initialised when the structure is allocated
out of the slab. Hence just remove the assert...
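
For reference, the allocation-side check mentioned above looks roughly
like this (simplified from xfs_inode_alloc(); alloc_sketch() is just a
stand-in name for this example, not the real function):

static struct xfs_inode *
alloc_sketch(
	struct xfs_mount	*mp,
	xfs_ino_t		ino)
{
	struct xfs_inode	*ip;

	ip = kmem_zone_alloc(xfs_inode_zone, KM_SLEEP);
	if (!ip)
		return NULL;
	if (inode_init_always(mp->m_super, VFS_I(ip))) {
		kmem_zone_free(xfs_inode_zone, ip);
		return NULL;
	}

	/*
	 * Lock-leak protection lives here: an inode must never come
	 * out of the slab with i_flags_lock held or with state left
	 * over from its previous user.
	 */
	ASSERT(!spin_is_locked(&ip->i_flags_lock));
	ASSERT(atomic_read(&ip->i_pincount) == 0);
	ASSERT(!xfs_isiflocked(ip));

	ip->i_ino = ino;
	ip->i_mount = mp;
	return ip;
}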

Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
---
 fs/xfs/xfs_icache.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 193206b..474807a 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -119,11 +119,6 @@ xfs_inode_free(
                ip->i_itemp = NULL;
        }
 
-       /* asserts to verify all state is correct here */
-       ASSERT(atomic_read(&ip->i_pincount) == 0);
-       ASSERT(!spin_is_locked(&ip->i_flags_lock));
-       ASSERT(!xfs_isiflocked(ip));
-
        /*
         * Because we use RCU freeing we need to ensure the inode always
         * appears to be reclaimed with an invalid inode number when in the
@@ -135,6 +130,10 @@ xfs_inode_free(
        ip->i_ino = 0;
        spin_unlock(&ip->i_flags_lock);
 
+       /* asserts to verify all state is correct here */
+       ASSERT(atomic_read(&ip->i_pincount) == 0);
+       ASSERT(!xfs_isiflocked(ip));
+
        call_rcu(&VFS_I(ip)->i_rcu, xfs_inode_free_callback);
 }
 
-- 
1.8.3.2
