xfs
[Top] [All Lists]

Re: [PATCH 3/5] repair: phase 6 is trivially parallelisable

To: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Subject: Re: [PATCH 3/5] repair: phase 6 is trivially parallelisable
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Fri, 13 Dec 2013 07:53:12 +1100
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20131212184346.GA23479@xxxxxxxxxxxxx>
References: <1386832945-19763-1-git-send-email-david@xxxxxxxxxxxxx> <1386832945-19763-4-git-send-email-david@xxxxxxxxxxxxx> <20131212184346.GA23479@xxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Thu, Dec 12, 2013 at 10:43:46AM -0800, Christoph Hellwig wrote:
> >  static void
> >  add_dotdot_update(
> > @@ -64,12 +65,14 @@ add_dotdot_update(
> >             do_error(_("malloc failed add_dotdot_update (%zu bytes)\n"),
> >                     sizeof(dotdot_update_t));
> >  
> > +   pthread_mutex_lock(&dotdot_lock);
> >     dir->next = dotdot_update_list;
> >     dir->irec = irec;
> >     dir->agno = agno;
> >     dir->ino_offset = ino_offset;
> >  
> >     dotdot_update_list = dir;
> > +   pthread_mutex_unlock(&dotdot_lock);
> 
> Would be nice to make this use a list_head if you touch it anyway.
> (As a separate patch)
> 
> >  static void
> >  traverse_ags(
> > -   xfs_mount_t             *mp)
> > +   xfs_mount_t             *mp)
> 
> Not quite sure what actually changed here, but if you touch it anyway
> you might as well use the struct version..

Whitespace after xfs_mount_t, judging by the highlighting I see in
the editor right now.

> > +   if (!ag_stride) {
> > +           work_queue_t    queue;
> > +
> > +           queue.mp = mp;
> > +           pf_args[0] = start_inode_prefetch(0, 1, NULL);
> > +           for (i = 0; i < glob_agcount; i++) {
> > +                   pf_args[(~i) & 1] = start_inode_prefetch(i + 1, 1,
> > +                                   pf_args[i & 1]);
> > +                   traverse_function(&queue, i, pf_args[i & 1]);
> > +           }
> > +           return;
> >     }
> > +
> > +   /*
> > +    * create one worker thread for each segment of the volume
> > +    */
> > +   queues = malloc(thread_count * sizeof(work_queue_t));
> > +   for (i = 0, agno = 0; i < thread_count; i++) {
> > +           create_work_queue(&queues[i], mp, 1);
> > +           pf_args[0] = NULL;
> > +           for (j = 0; j < ag_stride && agno < glob_agcount; j++, agno++) {
> > +                   pf_args[0] = start_inode_prefetch(agno, 1, pf_args[0]);
> > +                   queue_work(&queues[i], traverse_function, agno,
> > +                              pf_args[0]);
> > +           }
> > +   }
> > +
> > +   /*
> > +    * wait for workers to complete
> > +    */
> > +   for (i = 0; i < thread_count; i++)
> > +           destroy_work_queue(&queues[i]);
> > +   free(queues);
> 
> 
> This is the third copy of this code block, might make sense to
> consolidate it.

Agreed, just haven't got to it.

> Btw, does anyone remember why we have the libxfs_bcache_overflowed()
> special case in phase4, but not anywhere else?

I recall something about memory consumption, but I doubt that code
can even trigger given that if we get to overflow conditions we
immediately double the cache size and so libxfs_bcache_overflowed()
will never see an overflow condition....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

<Prev in Thread] Current Thread [Next in Thread>