On 2011.06.21 at 20:57 +0200, Markus Trippelsdorf wrote:
> On 2011.06.21 at 20:24 +0200, Markus Trippelsdorf wrote:
> > On 2011.06.21 at 10:02 +0200, Markus Trippelsdorf wrote:
> > > On 2011.06.21 at 14:25 +1000, Dave Chinner wrote:
> > > > That is, you really need to get a profile of the rm -rf process - or
> > > > whatever is consuming the CPU time - (e.g. via perf or ftrace)
> > > > across the hang so we can narrow down the potential cause of the
> > > > latency. Speaking of which, latencytop might be helpful in
> > > > identifying where input is getting held up....
> > >
> > > I've recorded a profile with "perf record -g /home/markus/rm_sync"
> > > ~ % cat rm_sync
> > > rm -fr /mnt/tmp/tmp/linux && sync
> >
> > FWIW here are two links to svg time-charts produced by:
> >
> > perf timechart record /home/markus/rm_sync
> >
> > http://trippelsdorf.de/timechart1.svg
> > http://trippelsdorf.de/timechart2.svg
> >
>
> And this is what the mysterious kworker is doing during the sync.
> It's the one consuming most of the CPU time.
>
> 39.96%  kworker/3:0  [kernel.kallsyms]  0xffffffff811da9da  [k] xfs_trans_ail_update_bulk
>         |
>         --- xfs_trans_ail_update_bulk
>             xfs_trans_committed_bulk
>             xlog_cil_committed
>             xlog_state_do_callback
>             xlog_state_done_syncing
>             xlog_iodone
>             xfs_buf_iodone_work
>             process_one_work
>             worker_thread
>             kthread
>             kernel_thread_helper
>
The following patch fixes the problem for me.
diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index 5e68099..2f34efd 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -1856,7 +1856,7 @@ xfs_buf_init(void)
 		goto out;
 
 	xfslogd_workqueue = alloc_workqueue("xfslogd",
-					WQ_MEM_RECLAIM | WQ_HIGHPRI, 1);
+					WQ_MEM_RECLAIM | WQ_UNBOUND, 1);
 	if (!xfslogd_workqueue)
 		goto out_free_buf_zone;
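
For reference, here is roughly what the two variants look like in isolation.
This is only a throwaway illustration (a dummy module, not the real xfs_buf.c
context; the "xfslogd-demo" name and the wq_flags_* functions are made up),
but it shows the one thing the patch changes: whether the completion work runs
on the per-CPU high-priority workers or on the unbound worker pool that the
scheduler is free to place on any CPU.

/*
 * Illustration only -- not the xfs_buf.c change itself.
 * alloc_workqueue() takes a name, flags and max_active.
 */
#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *test_wq;

static int __init wq_flags_init(void)
{
	/* what mainline currently does: per-CPU workers, high priority */
	test_wq = alloc_workqueue("xfslogd-demo",
				  WQ_MEM_RECLAIM | WQ_HIGHPRI, 1);

	/*
	 * what the patch above does instead: unbound workers, which the
	 * scheduler may run on any CPU rather than pinning them:
	 *
	 * test_wq = alloc_workqueue("xfslogd-demo",
	 *			     WQ_MEM_RECLAIM | WQ_UNBOUND, 1);
	 */
	if (!test_wq)
		return -ENOMEM;
	return 0;
}

static void __exit wq_flags_exit(void)
{
	destroy_workqueue(test_wq);
}

module_init(wq_flags_init);
module_exit(wq_flags_exit);
MODULE_LICENSE("GPL");

I'm not claiming this is the right fix, only that moving the xfslogd work off
the per-CPU high-priority workers is what makes the difference here.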
--
Markus