
[PATCH] xfs: reduce lock traffic on incore sb lock

To: xfs@xxxxxxxxxxx
Subject: [PATCH] xfs: reduce lock traffic on incore sb lock
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Wed, 29 Sep 2010 10:51:40 +1000
From: Dave Chinner <dchinner@xxxxxxxxxx>

Under heavy parallel unlink workloads, the incore superblock lock is
heavily trafficked in xfs_mod_incore_sb_batch(). This is despite the
fact that the counters being modified are typically per-cpu counters
that do not require the lock. IOWs, we are locking and unlocking the
superblock lock needlessly, and the result is that it is the third
most heavily contended lock in the system under these workloads.

Fix this by only locking the superblock lock when we are modifying a
counter protected by it. This completely removes the m_sb_lock from
lock_stat traces during create/remove workloads.
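
For illustration only (not part of the patch): a minimal userspace sketch
of the on-demand locking pattern the change below uses, with a pthread
mutex standing in for the kernel spinlock and made-up counter names. The
lock is only acquired when a field actually needs it, is dropped before
the lock-free (per-cpu style) path runs, and is released at the end only
if it is still held.

/* Illustrative userspace sketch of on-demand locking; not kernel code. */
#include <pthread.h>
#include <stdio.h>

enum field_type { PERCPU_FIELD, LOCKED_FIELD };

struct delta {
        enum field_type type;
        long            value;
};

static pthread_mutex_t sb_lock = PTHREAD_MUTEX_INITIALIZER;
static long percpu_counter;     /* stands in for a per-cpu counter */
static long locked_counter;     /* stands in for a counter protected by sb_lock */

static void apply_batch(struct delta *d, int n)
{
        int locked = 0;
        int i;

        for (i = 0; i < n; i++) {
                switch (d[i].type) {
                case PERCPU_FIELD:
                        /* lock-free path: drop the lock if we happen to hold it */
                        if (locked) {
                                locked = 0;
                                pthread_mutex_unlock(&sb_lock);
                        }
                        percpu_counter += d[i].value;
                        break;
                default:
                        /* counter protected by the lock: take it on demand */
                        if (!locked) {
                                pthread_mutex_lock(&sb_lock);
                                locked = 1;
                        }
                        locked_counter += d[i].value;
                        break;
                }
        }
        /* only unlock on exit if the last field processed needed the lock */
        if (locked)
                pthread_mutex_unlock(&sb_lock);
}

int main(void)
{
        struct delta batch[] = {
                { PERCPU_FIELD,  4 },
                { LOCKED_FIELD, -1 },
                { PERCPU_FIELD,  2 },
        };

        apply_batch(batch, 3);
        printf("percpu=%ld locked=%ld\n", percpu_counter, locked_counter);
        return 0;
}

A batch consisting only of per-cpu style fields never touches the lock at
all, which is why m_sb_lock disappears from lock_stat in the workloads
described above.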

Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
---
 fs/xfs/xfs_mount.c |   33 ++++++++++++++++++++++++---------
 1 files changed, 24 insertions(+), 9 deletions(-)

diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index 396d324..adc4ab9 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -1883,21 +1883,23 @@ xfs_mod_incore_sb(
  * Either all of the specified deltas will be applied or none of
  * them will.  If any modified field dips below 0, then all modifications
  * will be backed out and EINVAL will be returned.
+ *
+ * The @m_sb_lock is taken and dropped on demand according to the type of
+ * counter being modified to minimise lock traffic as this can be a very hot
+ * lock.
  */
 int
 xfs_mod_incore_sb_batch(xfs_mount_t *mp, xfs_mod_sb_t *msb, uint nmsb, int rsvd)
 {
        int             status=0;
        xfs_mod_sb_t    *msbp;
+       int             locked = 0;
 
        /*
         * Loop through the array of mod structures and apply each
         * individually.  If any fail, then back out all those
-        * which have already been applied.  Do all of this within
-        * the scope of the m_sb_lock so that all of the changes will
-        * be atomic.
+        * which have already been applied.
         */
-       spin_lock(&mp->m_sb_lock);
        msbp = &msb[0];
        for (msbp = &msbp[0]; msbp < (msb + nmsb); msbp++) {
                /*
@@ -1911,16 +1913,22 @@ xfs_mod_incore_sb_batch(xfs_mount_t *mp, xfs_mod_sb_t *msb, uint nmsb, int rsvd)
                case XFS_SBS_IFREE:
                case XFS_SBS_FDBLOCKS:
                        if (!(mp->m_flags & XFS_MOUNT_NO_PERCPU_SB)) {
-                               spin_unlock(&mp->m_sb_lock);
+                               if (locked) {
+                                       locked = 0;
+                                       spin_unlock(&mp->m_sb_lock);
+                               }
                                status = xfs_icsb_modify_counters(mp,
                                                        msbp->msb_field,
                                                        msbp->msb_delta, rsvd);
-                               spin_lock(&mp->m_sb_lock);
                                break;
                        }
                        /* FALLTHROUGH */
 #endif
                default:
+                       if (!locked) {
+                               spin_lock(&mp->m_sb_lock);
+                               locked = 1;
+                       }
                        status = xfs_mod_incore_sb_unlocked(mp,
                                                msbp->msb_field,
                                                msbp->msb_delta, rsvd);
@@ -1949,17 +1957,23 @@ xfs_mod_incore_sb_batch(xfs_mount_t *mp, xfs_mod_sb_t *msb, uint nmsb, int rsvd)
                        case XFS_SBS_IFREE:
                        case XFS_SBS_FDBLOCKS:
                                if (!(mp->m_flags & XFS_MOUNT_NO_PERCPU_SB)) {
-                                       spin_unlock(&mp->m_sb_lock);
+                                       if (locked) {
+                                               locked = 0;
+                                               spin_unlock(&mp->m_sb_lock);
+                                       }
                                        status = xfs_icsb_modify_counters(mp,
                                                        msbp->msb_field,
                                                        -(msbp->msb_delta),
                                                        rsvd);
-                                       spin_lock(&mp->m_sb_lock);
                                        break;
                                }
                                /* FALLTHROUGH */
 #endif
                        default:
+                               if (!locked) {
+                                       spin_lock(&mp->m_sb_lock);
+                                       locked = 1;
+                               }
                                status = xfs_mod_incore_sb_unlocked(mp,
                                                        msbp->msb_field,
                                                        -(msbp->msb_delta),
@@ -1970,7 +1984,8 @@ xfs_mod_incore_sb_batch(xfs_mount_t *mp, xfs_mod_sb_t *msb, uint nmsb, int rsvd)
                        msbp--;
                }
        }
-       spin_unlock(&mp->m_sb_lock);
+       if (locked)
+               spin_unlock(&mp->m_sb_lock);
        return status;
 }
 
-- 
1.7.1
