[PATCH 2/2] xfs: mark the xfs-alloc workqueue as high priority
Eric Sandeen
sandeen at sandeen.net
Fri Jan 9 12:12:04 CST 2015
I had a case reported where a system under high stress
deadlocked. A btree split had been handed off to the xfs
allocation workqueue while the xfs_ilock was held
exclusively. However, the other xfs_end_io workers were
not running, because they were blocked waiting for that
same lock. As a result, the queued xfs allocation work
never got to run, and everything ground to a halt.
To be honest, it's not clear to me exactly how the
workqueue subsystem schedules this sort of thing. But in
testing, making the allocation workqueue high priority,
so that its work gets added to the front of the pending
work list, resolves the problem. We did similar things
for the xfs-log workqueues, for similar reasons.
Signed-off-by: Eric Sandeen <sandeen at redhat.com>
---
(slight whitespace shift is to fit in 80 cols, sorry!)
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index e5bdca9..9c549e1 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -874,7 +874,7 @@ xfs_init_mount_workqueues(
goto out_destroy_log;
mp->m_alloc_workqueue = alloc_workqueue("xfs-alloc/%s",
- WQ_MEM_RECLAIM|WQ_FREEZABLE, 0, mp->m_fsname);
+ WQ_MEM_RECLAIM|WQ_FREEZABLE|WQ_HIGHPRI, 0, mp->m_fsname);
if (!mp->m_alloc_workqueue)
goto out_destroy_eofblocks;