| To: | David Chinner <dgc@xxxxxxx> |
|---|---|
| Subject: | Re: Review: Reduce in-core superblock lock contention near ENOSPC |
| From: | Lachlan McIlroy <lachlan@xxxxxxx> |
| Date: | Thu, 30 Nov 2006 18:03:40 +0000 |
| Cc: | xfs-dev@xxxxxxx, xfs@xxxxxxxxxxx |
| In-reply-to: | <20061123044122.GU11034@xxxxxxxxxxxxxxxxx> |
| Organization: | SGI |
| References: | <20061123044122.GU11034@xxxxxxxxxxxxxxxxx> |
| Reply-to: | lachlan@xxxxxxx |
| Sender: | xfs-bounce@xxxxxxxxxxx |
| User-agent: | Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.7.12) Gecko/20050920 |
Dave,
Could you have changed the SB_LOCK from a spinlock to a blocking
mutex and achieved a similar effect?
Has this change had much testing on a large machine?
These changes wouldn't apply cleanly to tot (3 hunks failed in
xfs_mount.c), but I couldn't see why.
The changes look fine to me; a couple of comments below.
Lachlan
@@ -1479,9 +1479,11 @@ xfs_mod_incore_sb_batch(xfs_mount_t *mp,
case XFS_SBS_IFREE:
case XFS_SBS_FDBLOCKS:
if (!(mp->m_flags & XFS_MOUNT_NO_PERCPU_SB)) {
- status = xfs_icsb_modify_counters_locked(mp,
+ XFS_SB_UNLOCK(mp, s);
+ status = xfs_icsb_modify_counters(mp,
msbp->msb_field,
msbp->msb_delta, rsvd);
+ s = XFS_SB_LOCK(mp);
break;
}
/* FALLTHROUGH */
Is it safe to be releasing the SB_LOCK here? Is it assumed that the
superblock won't change while we process the list of xfs_mod_sb
structures?
@@ -1515,11 +1517,12 @@ xfs_mod_incore_sb_batch(xfs_mount_t *mp,
case XFS_SBS_IFREE:
case XFS_SBS_FDBLOCKS:
if (!(mp->m_flags & XFS_MOUNT_NO_PERCPU_SB)) {
- status =
- xfs_icsb_modify_counters_locked(mp,
+ XFS_SB_UNLOCK(mp, s);
+ status = xfs_icsb_modify_counters(mp,
msbp->msb_field,
-(msbp->msb_delta),
rsvd);
+ s = XFS_SB_LOCK(mp);
break;
}
/* FALLTHROUGH */
Same as above.
@@ -1882,6 +1895,17 @@ xfs_icsb_disable_counter(
ASSERT((field >= XFS_SBS_ICOUNT) && (field <= XFS_SBS_FDBLOCKS));
+ /*
+ * If we are already disabled, then there is nothing to do
+ * here. We check before locking all the counters to avoid
+ * the expensive lock operation when being called in the
+ * slow path and the counter is already disabled. This is
+ * safe because the only time we set or clear this state is under
+ * the m_icsb_mutex.
+ */
+ if (xfs_icsb_counter_disabled(mp, field))
+ return 0;
+
xfs_icsb_lock_all_counters(mp);
if (!test_and_set_bit(field, &mp->m_icsb_counters)) {
/* drain back to superblock */
Nice one, that will avoid a lot of unnecessary work.