On 10.10.2011 07:55, Markus Trippelsdorf wrote:
Wouldn't it be possible to verify that the problem also goes away with
this simple one-liner?
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 2366c54..daf30c9 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -1654,7 +1654,7 @@ xfs_init_workqueues(void)
 	if (!xfs_syncd_wq)
 		goto out;

-	xfs_ail_wq = alloc_workqueue("xfsail", WQ_CPU_INTENSIVE, 8);
+	xfs_ail_wq = alloc_workqueue("xfsail", WQ_HIGHPRI | WQ_CPU_INTENSIVE, 8);
 	if (!xfs_ail_wq)
 		goto out_destroy_syncd;
From Documentation/workqueue.txt:
WQ_HIGHPRI | WQ_CPU_INTENSIVE
This combination makes the wq avoid interaction with
concurrency management completely and behave as a simple
per-CPU execution context provider. Work items queued on a
highpri CPU-intensive wq start execution as soon as resources
are available and don't affect execution of other work items.
So this should be identical to reverting back to the kthread. No?
CCing Tejun, maybe he can comment on this?
We already tested this patch and it still fails / deadlocks:
diff --git a/fs/xfs/linux-2.6/xfs_super.c b/fs/xfs/linux-2.6/xfs_super.c
index a1a881e..6377f51 100644
--- a/fs/xfs/linux-2.6/xfs_super.c
+++ b/fs/xfs/linux-2.6/xfs_super.c
@@ -1669,7 +1669,7 @@ xfs_init_workqueues(void)
 	if (!xfs_syncd_wq)
 		goto out;

-	xfs_ail_wq = alloc_workqueue("xfsail", WQ_CPU_INTENSIVE, 8);
+	xfs_ail_wq = alloc_workqueue("xfsail", WQ_MEM_RECLAIM | WQ_HIGHPRI, 512);
 	if (!xfs_ail_wq)
 		goto out_destroy_syncd;
diff --git a/fs/xfs/xfs_trans_ail.c b/fs/xfs/xfs_trans_ail.c
index 953b142..638ea8b 100644
--- a/fs/xfs/xfs_trans_ail.c
+++ b/fs/xfs/xfs_trans_ail.c
@@ -600,7 +600,7 @@ out_done:
 	}

 	/* There is more to do, requeue us. */
-	queue_delayed_work(xfs_syncd_wq, &ailp->xa_work,
+	queue_delayed_work(xfs_ail_wq, &ailp->xa_work,
 			msecs_to_jiffies(tout));
 }
@@ -637,7 +637,7 @@ xfs_ail_push(
 	smp_wmb();
 	xfs_trans_ail_copy_lsn(ailp, &ailp->xa_target, &threshold_lsn);
 	if (!test_and_set_bit(XFS_AIL_PUSHING_BIT, &ailp->xa_flags))
-		queue_delayed_work(xfs_syncd_wq, &ailp->xa_work, 0);
+		queue_delayed_work(xfs_ail_wq, &ailp->xa_work, 0);
 }

 /*
Stefan