
Re: [PATCH 4/6] Replace inode flush semaphore with a completion

To: Daniel Walker <dwalker@xxxxxxxxxx>
Subject: Re: [PATCH 4/6] Replace inode flush semaphore with a completion
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 14 Aug 2008 10:19:38 +1000
Cc: xfs@xxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, matthew@xxxxxx
In-reply-to: <1218641641.6166.32.camel@xxxxxxxxxxxxxxxxx>
Mail-followup-to: Daniel Walker <dwalker@xxxxxxxxxx>, xfs@xxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, matthew@xxxxxx
References: <1214556284-4160-1-git-send-email-david@xxxxxxxxxxxxx> <1214556284-4160-5-git-send-email-david@xxxxxxxxxxxxx> <1218597077.6166.15.camel@xxxxxxxxxxxxxxxxx> <20080813075057.GZ6119@disturbed> <1218641641.6166.32.camel@xxxxxxxxxxxxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.5.18 (2008-05-17)
On Wed, Aug 13, 2008 at 08:34:01AM -0700, Daniel Walker wrote:
> On Wed, 2008-08-13 at 17:50 +1000, Dave Chinner wrote:
> 
> > Right now we have the case where no matter what type of flush
> > is done, the caller does not have to worry about unlocking
> > the flush lock - it will be done as part of the flush. You're
> > suggestion makes that conditional on whether we did a
> > sync flush or not.
> > 
> > So, what happens when you call:
> > 
> > xfs_iflush(ip, XFS_IFLUSH_DELWRI_ELSE_SYNC);
> > 
> > i.e. xfs_iflush() may do a delayed flush or a sync flush depending
> > on the current state of the inode. The caller has no idea what type
> > of flush was done, so will have no idea whether to unlock or not.
> 
> You wouldn't base the unlock on what iflush does, you would
> unconditionally unlock.

It's not really a flush lock at that point - it's a state lock.
We've already got one of those, and a set of state flags that it
protects.

Basically you're suggesting that we keep external state to the
completion that tracks whether a completion is in progress
or not. You can't use a mutex like you suggested to protect
state because you can't hold it while doing a wait_for_completion()
and then use it to clear the state flag before calling complete().
We can use the internal inode state flags and lock to keep
track of this. i.e:

void
xfs_iflock(
        xfs_inode_t     *ip)
{
        xfs_iflags_set(ip, XFS_IFLUSH_INPROGRESS);
        wait_for_completion(ip->i_flush_wq);
}

int
xfs_iflock_nowait(
        xfs_inode_t     *ip)
{
        if (xfs_iflags_test(ip, XFS_IFLUSH_INPROGRESS))
                return 1;
        xfs_iflags_set(ip, XFS_IFLUSH_INPROGRESS);
        wait_for_completion(ip->i_flush_wq);
        return 0;
}

void
xfs_ifunlock(
        xfs_inode_t     *ip)
{
        xfs_iflags_clear(ip, XFS_IFLUSH_INPROGRESS);
        complete(ip->i_flush_wq);
}

*However*, given that we already have this exact state in the
completion itself, I see little reason to add the extra locking
overhead, and the race-condition complexity, of keeping external
state coherent with the completion. Modifying the completion
API slightly to export this state is the simplest, easiest solution
to the problem....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

