[PATCH 5/5] xfs: don't chain ioends during writepage submission
Brian Foster
bfoster at redhat.com
Wed Feb 10 07:18:17 CST 2016
On Wed, Feb 10, 2016 at 08:59:00AM +1100, Dave Chinner wrote:
> On Tue, Feb 09, 2016 at 09:23:55AM -0500, Brian Foster wrote:
> > On Mon, Feb 08, 2016 at 04:44:18PM +1100, Dave Chinner wrote:
> > > @@ -738,29 +726,22 @@ xfs_writepage_submit(
> > >  	struct writeback_control *wbc,
> > >  	int			status)
> > >  {
> > > -	struct blk_plug		plug;
> > > -
> > > -	/* Reserve log space if we might write beyond the on-disk inode size. */
> > > -	if (!status && wpc->ioend && wpc->ioend->io_type != XFS_IO_UNWRITTEN &&
> > > -	    xfs_ioend_is_append(wpc->ioend))
> > > -		status = xfs_setfilesize_trans_alloc(wpc->ioend);
> > > -
> > > -	if (wpc->iohead) {
> > > -		blk_start_plug(&plug);
> > > -		xfs_submit_ioend(wbc, wpc->iohead, status);
> > > -		blk_finish_plug(&plug);
> > > -	}
> >
> > We've dropped our plug here but I don't see anything added in
> > xfs_vm_writepages(). Shouldn't we have one there now that ioends are
> > submitted as we go? generic_writepages() uses one around its
> > write_cache_pages() call..
>
> It's not really necessary, as we now have higher level plugging in
> the writeback code that will get flushed on context switch, and if
> we don't have a high level plug (e.g. fsync triggered writeback),
> then we submit the IO immediately, just like flushing the plug here
> would do anyway....
>
Ok, I'm digging around the wb code a bit and I see plugs in/around
wb_writeback(), so I assume that's what you're referring to in the first
case. I'm not quite following the fsync case though...
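
For reference, the plug I'm looking at is the one wrapping the main
writeback loop in wb_writeback(), roughly like this (a paraphrased
sketch from my reading of fs/fs-writeback.c, not the exact code):

static long wb_writeback(struct bdi_writeback *wb,
			 struct wb_writeback_work *work)
{
	struct blk_plug plug;

	blk_start_plug(&plug);
	for (;;) {
		/*
		 * ... writeback_sb_inodes()/__writeback_inodes_wb(),
		 * breaking out once the requested work is done ...
		 */
	}
	blk_finish_plug(&plug);
	/* ... */
}

IIUC that plug is also flushed early if the flusher thread blocks
(blk_schedule_flush_plug() from the schedule() path), which I take to be
the "flushed on context switch" part.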

In the current upstream code, fsync() leads to the following call chain:

  filemap_write_and_wait_range()
    __filemap_fdatawrite_range()
      do_writepages()
        xfs_vm_writepages()
          generic_writepages()
            blk_start_plug()
            write_cache_pages()
            blk_finish_plug()

After this series, we have the following:

  filemap_write_and_wait_range()
    __filemap_fdatawrite_range()
      do_writepages()
        xfs_vm_writepages()
          write_cache_pages()

... with no plug that I can see. What am I missing?
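
If it turns out to matter, I'd have thought it would just be a case of
wrapping the write_cache_pages() call, e.g. something like this
(hypothetical sketch against this series, names as I recall them from
the earlier patches):

STATIC int
xfs_vm_writepages(
	struct address_space	*mapping,
	struct writeback_control *wbc)
{
	struct xfs_writepage_ctx wpc = {
		.io_type	= XFS_IO_INVALID,
	};
	struct blk_plug		plug;
	int			ret;

	blk_start_plug(&plug);
	ret = write_cache_pages(mapping, wbc, xfs_do_writepage, &wpc);
	/* ... submit the final ioend as this series already does ... */
	blk_finish_plug(&plug);
	return ret;
}
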
...
> > > -	else {
> > > +		while (ioend_to_submit) {
> > > +			struct xfs_ioend	*next = ioend_to_submit->io_list;
> > > +
> > > +			ioend_to_submit->io_list = NULL;
> > > +			xfs_submit_ioend(wbc, ioend_to_submit, 0);
> > > +			ioend_to_submit = next;
> > > +		}
> > > +	} else {
> > >  		xfs_aops_discard_page(page);
> > >  		ClearPageUptodate(page);
> > >  		unlock_page(page);
> >
> > If we have an error and count == 0, we know that ioend_to_submit is NULL
> > because that is only potentially set once the first buffer is added.
> > That said, this doesn't mean that we don't have an ioend waiting on the
> > wpc. If we do, we still return the error and the ioend is errored out.
> >
> > I wonder if that is really necessary if we haven't added any buffers
> > from the page..? Could we submit the ioend properly in that case? OTOH,
> > that might complicate the error reporting and an error here might be
> > serious enough that it isn't worth it, as opposed to just making sure we
> > clean up everything appropriately.
>
> The way I've done it is the same as the existing code - on error the
> entire ioend chain that has been built is errored out. I'd prefer to
> keep it that way right now to minimise the potential behavioural
> changes of the patch series. We can look to changing to partial
> submission in a separate patch set if it makes sense to do so.
>
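
For reference, the failure case in the current xfs_submit_ioend() looks
roughly like this (paraphrasing the second pass from memory, so the
details may be off):

	/* Pass 2 - submit I/O */
	ioend = head;
	do {
		next = ioend->io_list;

		/*
		 * If we are failing the IO now, just mark the ioend with
		 * an error and finish it without submitting any bios.
		 */
		if (fail) {
			ioend->io_error = fail;
			xfs_finish_ioend(ioend);
			continue;
		}

		/* ... otherwise build and submit bios for the buffers ... */
	} while ((ioend = next) != NULL);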

So the existing code is roughly equivalent, and that seems fair to me.
Thanks.
Brian
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david at fromorbit.com