
Re: [PATCH] Remove l_flushsema

To: David Chinner <dgc@xxxxxxx>
Subject: Re: [PATCH] Remove l_flushsema
From: Matthew Wilcox <matthew@xxxxxx>
Date: Wed, 30 Apr 2008 05:52:53 -0600
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>, xfs@xxxxxxxxxxx, linux-fsdevel@xxxxxxxxxxxxxxx
In-reply-to: <20080430111154.GO108924158@xxxxxxx>
References: <20080430090502.GH14976@xxxxxxxxxxxxxxxx> <20080430104125.GM108924158@xxxxxxx> <20080430105832.GA20442@xxxxxxxxxxxxx> <20080430111154.GO108924158@xxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.5.13 (2006-08-11)
On Wed, Apr 30, 2008 at 09:11:54PM +1000, David Chinner wrote:
> On Wed, Apr 30, 2008 at 06:58:32AM -0400, Christoph Hellwig wrote:
> > On Wed, Apr 30, 2008 at 08:41:25PM +1000, David Chinner wrote:
> > > The only thing that I'm concerned about here is that this will
> > > substantially increase the time the l_icloglock is held. This is
> > > a severely contended lock on large cpu count machines and putting
> > > the wakeup inside this lock will increase the hold time.
> > > 
> > > I guess I can address this by adding a new lock for the waitqueue
> > > in a separate patch set.
> > 
> > waitqueues are locked internally and don't need synchronization.  With
> > a little bit of re-arranging the code the wake_up could probably be
> > moved out of the critical section.
> 
> Yeah, I just realised that myself and was about to reply as such....
> 
> I'll move the wakeup outside the lock.

I can't tell whether this race matters ... probably not:

N processes come in and queue up waiting for the flush
xlog_state_do_callback() is called
it unlocks the spinlock
a new task comes in and takes the spinlock
wakeups happen

i.e. do we care about 'fairness' here, or is it OK for a new task to jump
the queue?
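As a sketch of the pattern under discussion (a userspace pthread analogy, not the actual xfs_log.c code; `flush_done`, `waiter` and `do_callback` are illustrative names): the state change happens under the lock, but the wakeup is issued after dropping it, which shortens the hold time. The gap between the unlock and the wakeup is exactly the window where a newly arriving task can take the lock first, i.e. the queue-jumping question above.

```c
#include <pthread.h>

/* Illustrative analogue of l_icloglock protecting the flush state. */
static pthread_mutex_t icloglock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  flush_wait = PTHREAD_COND_INITIALIZER;
static int flush_done = 0;

/* Waiter: queues up for the flush, like a process sleeping on
 * l_flushsema.  Rechecking the predicate under the mutex is what
 * makes issuing the wakeup outside the lock safe. */
static void *waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&icloglock);
    while (!flush_done)
        pthread_cond_wait(&flush_wait, &icloglock);
    pthread_mutex_unlock(&icloglock);
    return NULL;
}

/* Completion path: update state under the lock, wake up outside it.
 * Between the unlock and the broadcast, a new task may acquire the
 * lock ahead of the queued waiters. */
static void do_callback(void)
{
    pthread_mutex_lock(&icloglock);
    flush_done = 1;                       /* state change under the lock */
    pthread_mutex_unlock(&icloglock);
    pthread_cond_broadcast(&flush_wait);  /* wakeup outside the lock */
}
```

Because every waiter re-tests `flush_done` while holding the mutex, no wakeup can be lost; the only effect of the reordering is the fairness question raised here.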

-- 
Intel are signing my paycheques ... these opinions are still mine
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."

