[PATCH 2/9] xfs: introduce a xfssyncd workqueue
Dave Chinner
david at fromorbit.com
Thu Apr 7 19:41:11 CDT 2011
On Thu, Apr 07, 2011 at 04:34:53PM -0500, Alex Elder wrote:
> On Thu, 2011-04-07 at 11:57 +1000, Dave Chinner wrote:
> > From: Dave Chinner <dchinner at redhat.com>
> >
> > All of the work xfssyncd does is background functionality. There is
> > no need for a thread per filesystem to do this work - it can all be
> > managed by a global workqueue now that workqueues manage concurrency
> > effectively.
> >
> > Introduce a new global xfssyncd workqueue, and convert the periodic
> > work to use this new functionality. To do this, use a delayed work
> > construct to schedule the next running of the periodic sync work
> > for the filesystem. When the sync work is complete, queue a new
> > delayed work for the next running of the sync work.
> >
> > For laptop mode, we wait for completion of the sync work, so ensure
> > that the sync work queuing interface can flush and wait for work to
> > complete. This allows the workqueue infrastructure to replace the
> > sequence number and wakeup mechanism that is currently used.
> >
> > Because the sync work does non-trivial amounts of work, mark the
> > new work queue as CPU intensive.
>
> (I've now seen your next patch, so I think my confusion
> is resolved. I'm sending the following as I originally
> wrote it anyway.)
>
> I have two comments below. One is something that can be
> fixed later, and the other I think may be a problem. I also
> was just a little confused about something.
>
> The confusing thing is that you are still spawning a kernel
> thread per filesystem in xfs_syncd_init(), which still
> waits xfs_syncd_centisecs between runs, and which then
> runs work queued on the mount point's m_sync_list.
>
> I *think* the reason it's confusing is just that your
> description talks about "all of the work xfssyncd does,"
".. is background functionality." The rest of the patch description
talks only about introducing the workqueue and converting a single
operation to use it:
> while this patch just pulls out the data syncing portion
> of what it does.
>
> The patch preserves the ability to make
> use of the per-FS periodic syncer thread to flush inodes
> (via xfs_flush_inodes()).
Right - that's converted in the next patch, and the unused syncd
thread is removed.
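
To be concrete about what this patch itself does: the periodic sync
work becomes a self-rearming delayed work item on the new workqueue.
Roughly like this - a sketch only, the helper and field names here are
approximate rather than quoted from the patch:

static void
xfs_syncd_queue_sync(
	struct xfs_mount	*mp)
{
	/* schedule the next run of the periodic sync work */
	queue_delayed_work(xfs_syncd_wq, &mp->m_sync_work,
				msecs_to_jiffies(xfs_syncd_centisecs * 10));
}

static void
xfs_sync_worker(
	struct work_struct	*work)
{
	struct xfs_mount	*mp = container_of(to_delayed_work(work),
					struct xfs_mount, m_sync_work);

	/* ... do the periodic background sync work ... */

	/* queue us up again for the next interval */
	xfs_syncd_queue_sync(mp);
}

For laptop mode, the caller can then flush the delayed work and wait
for it to complete - e.g. something like
flush_delayed_work(&mp->m_sync_work) - instead of relying on the
sequence number and wakeup that is used now.
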
>
> In any case, with the exception of the timeout thing
> below (which ought to be easy to fix) the code looks
> correct to me. It just took me a little while to
> reconcile what the delayed workqueues (named
> "xfssyncd") do, versus what the "xfssyncd" threads
> that remain do.
>
> Despite the above, you can consider this reviewed by me.
>
> Reviewed-by: Alex Elder <aelder at sgi.com>
>
> > Signed-off-by: Dave Chinner <dchinner at redhat.com>
> > Reviewed-by: Christoph Hellwig <hch at lst.de>
> > ---
> > fs/xfs/linux-2.6/xfs_super.c | 24 +++++------
> > fs/xfs/linux-2.6/xfs_sync.c | 86 ++++++++++++++++++++---------------------
> > fs/xfs/linux-2.6/xfs_sync.h | 2 +
> > fs/xfs/xfs_mount.h | 4 +-
> > 4 files changed, 56 insertions(+), 60 deletions(-)
> >
> > diff --git a/fs/xfs/linux-2.6/xfs_super.c b/fs/xfs/linux-2.6/xfs_super.c
> > index 1ba5c45..99dded9 100644
> > --- a/fs/xfs/linux-2.6/xfs_super.c
> > +++ b/fs/xfs/linux-2.6/xfs_super.c
>
> . . .
>
> > @@ -1833,13 +1822,21 @@ init_xfs_fs(void)
> > if (error)
> > goto out_cleanup_procfs;
> >
> > + xfs_syncd_wq = alloc_workqueue("xfssyncd", WQ_CPU_INTENSIVE, 8);
>
> The value (8) for max_active here is arbitrary; maybe it could
> be justified with some magic words in a comment or something.
> But I really think it should be configurable, I suppose
> via a module parameter, for the benefit of unusual (i.e.
> large) configurations.
I'll add a comment. FYI, it's a per-cpu number, not a global number.
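
Something along these lines, perhaps - the comment wording below is
only a suggestion, not necessarily what will end up in the patch:

	/*
	 * The sync work run from this workqueue can do non-trivial
	 * amounts of CPU and IO work, hence WQ_CPU_INTENSIVE. Note
	 * that max_active limits the number of concurrently executing
	 * work items per CPU, not globally, so 8 is a per-cpu
	 * concurrency cap and already scales with the number of CPUs
	 * in the machine.
	 */
	xfs_syncd_wq = alloc_workqueue("xfssyncd", WQ_CPU_INTENSIVE, 8);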