
[PATCH 09/10] xfs: remove MS_ACTIVE guard from inode reclaim work

To: xfs@xxxxxxxxxxx
Subject: [PATCH 09/10] xfs: remove MS_ACTIVE guard from inode reclaim work
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Wed, 7 Mar 2012 15:50:27 +1100
In-reply-to: <1331095828-28742-1-git-send-email-david@xxxxxxxxxxxxx>
References: <1331095828-28742-1-git-send-email-david@xxxxxxxxxxxxx>
From: Dave Chinner <dchinner@xxxxxxxxxx>

We need to be able to queue inode reclaim work during the mount
process, as quotacheck can cause large numbers of inodes to be read
in, and we need to clean them up periodically because the shrinkers
cannot run until after the mount process has completed.

The reclaim work is currently protected from running during the
unmount process by a check against MS_ACTIVE. Unfortunately, this
also means that the reclaim work cannot run during mount. The
unmount process should stop the reclaim cleanly before freeing
anything that the reclaim work depends on, so there is no need to
have this guard in place.

Also, the inode reclaim work is demand driven, so there is no need
to start it immediately during mount. It will be started the moment
an inode is queued for reclaim, so quotacheck will trigger it just
fine.

Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
---
 fs/xfs/xfs_super.c |    3 +--
 fs/xfs/xfs_sync.c  |   27 ++++++++++++++++-----------
 2 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 150d8f4..b1df512 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -968,8 +968,6 @@ xfs_fs_put_super(
 {
        struct xfs_mount        *mp = XFS_M(sb);
 
-       xfs_syncd_stop(mp);
-
        /*
         * Blow away any referenced inode in the filestreams cache.
         * This can and will cause log traffic as inodes go inactive
@@ -980,6 +978,7 @@ xfs_fs_put_super(
        xfs_flush_buftarg(mp->m_ddev_targp, 1);
 
        xfs_unmountfs(mp);
+       xfs_syncd_stop(mp);
        xfs_freesb(mp);
        xfs_icsb_destroy_counters(mp);
        xfs_close_devices(mp);
diff --git a/fs/xfs/xfs_sync.c b/fs/xfs/xfs_sync.c
index 71bf846..08967e9 100644
--- a/fs/xfs/xfs_sync.c
+++ b/fs/xfs/xfs_sync.c
@@ -496,7 +496,15 @@ xfs_sync_worker(
                                        struct xfs_mount, m_sync_work);
        int             error;
 
-       if (!(mp->m_flags & XFS_MOUNT_RDONLY)) {
+       /*
+        * We shouldn't write/force the log if we are in the mount/unmount
+        * process or on a read only filesystem. The workqueue still needs to be
+        * active in both cases, however, because it is used for inode reclaim
+        * during these times. Hence use the MS_ACTIVE flag to avoid doing
+        * anything in these periods.
+        */
+       if (!(mp->m_super->s_flags & MS_ACTIVE) &&
+           !(mp->m_flags & XFS_MOUNT_RDONLY)) {
                /* dgc: errors ignored here */
                if (mp->m_super->s_frozen == SB_UNFROZEN &&
                    xfs_log_need_covered(mp))
@@ -524,14 +532,6 @@ xfs_syncd_queue_reclaim(
        struct xfs_mount        *mp)
 {
 
-       /*
-        * We can have inodes enter reclaim after we've shut down the syncd
-        * workqueue during unmount, so don't allow reclaim work to be queued
-        * during unmount.
-        */
-       if (!(mp->m_super->s_flags & MS_ACTIVE))
-               return;
-
        rcu_read_lock();
        if (radix_tree_tagged(&mp->m_perag_tree, XFS_ICI_RECLAIM_TAG)) {
                queue_delayed_work(xfs_syncd_wq, &mp->m_reclaim_work,
@@ -600,7 +600,6 @@ xfs_syncd_init(
        INIT_DELAYED_WORK(&mp->m_reclaim_work, xfs_reclaim_worker);
 
        xfs_syncd_queue_sync(mp);
-       xfs_syncd_queue_reclaim(mp);
 
        return 0;
 }
@@ -610,7 +609,13 @@ xfs_syncd_stop(
        struct xfs_mount        *mp)
 {
        cancel_delayed_work_sync(&mp->m_sync_work);
-       cancel_delayed_work_sync(&mp->m_reclaim_work);
+
+       /*
+        * We flush any pending inode reclaim work rather than cancel it
+        * here. This ensures that there are no clean inodes queued during
+        * unmount left unreclaimed when we return.
+        */
+       flush_delayed_work_sync(&mp->m_reclaim_work);
        cancel_work_sync(&mp->m_flush_work);
 }
 
-- 
1.7.9
