
Re: XFS and write barriers.

To: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Subject: Re: XFS and write barriers.
From: David Chinner <dgc@xxxxxxx>
Date: Sun, 25 Mar 2007 14:51:26 +1100
Cc: David Chinner <dgc@xxxxxxx>, Neil Brown <neilb@xxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <20070323095055.GA13478@xxxxxxxxxxxxx>
References: <17923.11463.459927.628762@xxxxxxxxxxxxxx> <20070323053043.GD32602149@xxxxxxxxxxxxxxxxx> <20070323095055.GA13478@xxxxxxxxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4.2.1i
On Fri, Mar 23, 2007 at 09:50:55AM +0000, Christoph Hellwig wrote:
> On Fri, Mar 23, 2007 at 04:30:43PM +1100, David Chinner wrote:
> > On Fri, Mar 23, 2007 at 12:26:31PM +1100, Neil Brown wrote:
> > > 
> > > Hi,
> > >  I have two concerns related to XFS and write barrier support that I'm
> > >  hoping can be resolved.
> > > 
> > > Firstly in xfs_mountfs_check_barriers in fs/xfs/linux-2.6/xfs_super.c,
> > > it tests ....->queue->ordered to see if that is QUEUE_ORDERED_NONE.
> > > If it is, then barriers are disabled.
> > > 
> > > I think this is a layering violation - xfs really has no business
> > > looking that deeply into the device.
> > 
> > Except that the device behaviour determines what XFS needs to do
> > and there used to be no other way to find out.
> > 
> > Christoph, any reason for needing this check anymore? I can't see
> > any particular reason for needing to do this as __make_request()
> > will check it for us when we test now.
> 
> When I first implemented it I really disliked the idea of having
> requests fail asynchronously due to the lack of barriers.  Then
> someone (Jens?) told me we need to do this check anyway because
> devices might lie to us, at which point I implemented the test
> superblock writeback to check whether barriers actually work.
> 
> So yes, we could probably get rid of the check now, although I'd
> prefer the block layer exporting an API to the filesystem to tell
> it whether there is any point in trying to use barriers.

Ditto.
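
[For context, a condensed sketch of what that mount-time check looks
like, combining the queue->ordered peek Neil objects to with the test
superblock writeback Christoph describes.  This is illustrative only;
xfs_barrier_test() and the exact xfs_mount field names are assumptions,
not the verbatim tree code:

/*
 * Illustrative sketch only -- not the verbatim tree code.
 * Step one is the layering violation under discussion: peek at
 * queue->ordered.  Step two is the empirical check: issue one
 * ordered superblock write and see if the device rejects it.
 */
void
xfs_mountfs_check_barriers(xfs_mount_t *mp)
{
	int	error;

	if (mp->m_ddev_targp->bt_bdev->bd_disk->queue->ordered ==
	    QUEUE_ORDERED_NONE) {
		/* The block layer says this device cannot order writes. */
		mp->m_flags &= ~XFS_MOUNT_BARRIER;
		return;
	}

	/*
	 * xfs_barrier_test() is assumed to write the superblock with
	 * XBF_ORDERED set and return the resulting buffer error.
	 */
	error = xfs_barrier_test(mp);
	if (error == EOPNOTSUPP) {
		/* The device lied about barrier support: run without. */
		mp->m_flags &= ~XFS_MOUNT_BARRIER;
	}
}

The point of the second step is that queue->ordered only reports what
the driver advertises; only a real ordered write catches devices that
advertise barriers but fail them.]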

> > > Secondly, if a barrier write fails due to EOPNOTSUPP, it should be
> > > retried without the barrier (after possibly waiting for dependent
> > > requests to complete).  This is what other filesystems do, but I
> > > cannot find the code in xfs which does this.
> > 
> > XFS doesn't handle this - I was unaware that the barrier status of the
> > underlying block device could change....
> > 
> > OOC, when did this behaviour get introduced?
> 
> That would be really bad.  XFS metadata buffers can have multiple bios
> and retrying a single one would be rather difficult.
> 
> > +   /*
> > +    * We can get an EOPNOTSUPP error for ordered writes.  Here we clear the
> > +    * ordered flag and reissue them.  Because we can't tell the higher
> > +    * layers directly that they should not issue ordered I/O anymore, they
> > +    * need to check if the ordered flag was cleared during I/O completion.
> > +    */
> > +   if ((bp->b_error == EOPNOTSUPP) &&
> > +       (bp->b_flags & (XBF_ORDERED|XBF_ASYNC)) == (XBF_ORDERED|XBF_ASYNC)) {
> > +           XB_TRACE(bp, "ordered_retry", bp->b_iodone);
> > +           bp->b_flags &= ~XBF_ORDERED;
> > +           xfs_buf_iorequest(bp);
> > +   } else if (bp->b_iodone)
> >             (*(bp->b_iodone))(bp);
> >     else if (bp->b_flags & XBF_ASYNC)
> >             xfs_buf_relse(bp);
> 
> So you're retrying the whole I/O; this is probably better than trying
> to handle this at the bio level.  I still don't quite like doing another
> I/O from the I/O completion handler.

You're not the only one, Christoph. This may be better than trying
to handle it at lower layers, and far better than having to handle
it at every point in the higher layers where we may issue barrier
I/Os. 

But I *seriously dislike* having to reissue async I/Os in this
manner and then having to rely on a higher layer's I/O completion
handler to detect that the I/O was retried, and to change the way
the filesystem issues I/Os in the future. It's a really crappy
way of communicating between layers....
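
[Concretely, the higher layer here is the log I/O completion path.  A
minimal sketch of the detection, assuming an xlog_iodone()-style
handler and the XFS_MOUNT_BARRIER mount flag; the accessor and field
names are assumptions, not the verbatim tree code:

/*
 * Minimal sketch of the higher-layer side (names are assumptions).
 * The buffer layer cleared XBF_ORDERED before retrying, so the only
 * signal the log gets is the flag's absence at completion time.
 */
STATIC void
xlog_iodone(xfs_buf_t *bp)
{
	xlog_t	*log = XFS_BUF_FSPRIVATE2(bp, xlog_t *);

	if ((log->l_mp->m_flags & XFS_MOUNT_BARRIER) &&
	    !(bp->b_flags & XBF_ORDERED)) {
		/*
		 * This write went out with XBF_ORDERED set but completed
		 * without it: the barrier was rejected and the I/O
		 * reissued.  Stop using barriers for future log writes.
		 */
		log->l_mp->m_flags &= ~XFS_MOUNT_BARRIER;
	}

	/* ... normal log write completion continues here ... */
}

The awkwardness described above is visible here: nothing tells the log
that the retry happened; it has to infer it from a flag it set itself
having gone missing.]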

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

