

To: Ben Myers <bpm@xxxxxxx>
Subject: Re: [PATCH 0/5] xfs: more patches for 3.13
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 7 Nov 2013 12:57:06 +1100
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20131106230133.GX1935@xxxxxxx>
References: <1383280040-21979-1-git-send-email-david@xxxxxxxxxxxxx> <20131106230133.GX1935@xxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Wed, Nov 06, 2013 at 05:01:33PM -0600, Ben Myers wrote:
> On Fri, Nov 01, 2013 at 03:27:15PM +1100, Dave Chinner wrote:
> > Hi folks,
> > 
> > This series follows up the patches recently committed for 3.13.
> > The first two patches are the remaining uncommitted patches from
> > the previous series.
> > 
> > The next two patches are tracing patches, one for AIL manipulations
> > and the other for AGF and AGI read operations. Both of these were
> > written during recent debugging sessions, and both proved useful so
> > should be added to the menagerie of tracepoints we already have
> > available.
> > 
> > The final patch increases the inode cluster size for v5
> > filesystems. I'd like to get this into v5 filesystems for 3.13 so
> > it gets wider exposure ASAP, giving us more data to make informed
> > decisions about how to bring this back to v4 filesystems in a
> > safe and controlled manner.
> Applied 3 and 4.  I still don't understand why the locking on patch 2 is
> correct.  Seems like the readers of i_version hold different locks than we do
> when we log the inode.  Maybe Christoph can help me with that.

Readers don't need to hold a spinlock, and many don't. The spinlock
is only there to prevent concurrent updates from "losing" an update
due to races. All modifications to XFS inodes occur via
transactions, and transactions lock the inode exclusively, so we
can never lose an i_version update to a race. Hence we don't need
the spinlock during the update, either.


Dave Chinner
