

To: xfs@xxxxxxxxxxx
Subject: Re: [PATCH] xfs: improve sync behaviour in face of aggressive dirtying
From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Mon, 20 Jun 2011 04:18:02 -0400
Cc: Wu Fengguang <fengguang.wu@xxxxxxxxx>
In-reply-to: <20110617131401.GC2141@xxxxxxxxxxxxx>
References: <20110617131401.GC2141@xxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
As confirmed by Wu, moving to a workqueue flush as in the test patch
below brings our performance on par with other filesystems.  But there
are a major and a minor issue with that approach.

The minor one is that we always flush all work items, not just those
for the filesystem being flushed.  This might become an issue for larger
systems, or when we apply a similar scheme to fsync, which has the same
underlying issue.

The major one is that flush_workqueue only flushes work items that were
queued before it was called, but we requeue completions when we fail
to get the ilock in xfs_setfilesize, which can lead to lost i_size
updates when that happens.

I see two ways to fix this:  either we implement our own workqueue
look-alike based on the old workqueue code.  This would allow flushing
queues per-sb or even per-inode, and would let us special-case flushing
requeues as well before returning.

Or we copy the scheme ext4 uses for fsync (it completely fails to flush
the completion queue for plain sync), that is, add a list of pending
items to the inode, and a lock to protect it.  I don't like this because
it a) bloats the inode, b) adds a lot of complexity, and c) adds another
lock to the irq I/O completion path.

Any other ideas?


Index: xfs/fs/xfs/linux-2.6/xfs_sync.c
===================================================================
--- xfs.orig/fs/xfs/linux-2.6/xfs_sync.c        2011-06-17 14:16:18.442399481 +0200
+++ xfs/fs/xfs/linux-2.6/xfs_sync.c     2011-06-18 17:55:44.864025123 +0200
@@ -359,14 +359,16 @@ xfs_quiesce_data(
 {
        int                     error, error2 = 0;
 
-       /* push non-blocking */
-       xfs_sync_data(mp, 0);
        xfs_qm_sync(mp, SYNC_TRYLOCK);
-
-       /* push and block till complete */
-       xfs_sync_data(mp, SYNC_WAIT);
        xfs_qm_sync(mp, SYNC_WAIT);
 
+       /* flush all pending size updates and unwritten extent conversions */
+       flush_workqueue(xfsconvertd_workqueue);
+       flush_workqueue(xfsdatad_workqueue);
+
+       /* force out the newly dirtied log buffers */
+       xfs_log_force(mp, XFS_LOG_SYNC);
+
        /* write superblock and hoover up shutdown errors */
        error = xfs_sync_fsdata(mp);
 
Index: xfs/fs/xfs/linux-2.6/xfs_super.c
===================================================================
--- xfs.orig/fs/xfs/linux-2.6/xfs_super.c       2011-06-18 17:51:05.660705925 +0200
+++ xfs/fs/xfs/linux-2.6/xfs_super.c    2011-06-18 17:52:50.107367305 +0200
@@ -929,45 +929,12 @@ xfs_fs_write_inode(
                 * ->sync_fs call do that for thus, which reduces the number
                 * of synchronous log foces dramatically.
                 */
-               xfs_ioend_wait(ip);
                xfs_ilock(ip, XFS_ILOCK_SHARED);
-               if (ip->i_update_core) {
+               if (ip->i_update_core)
                        error = xfs_log_inode(ip);
-                       if (error)
-                               goto out_unlock;
-               }
-       } else {
-               /*
-                * We make this non-blocking if the inode is contended, return
-                * EAGAIN to indicate to the caller that they did not succeed.
-                * This prevents the flush path from blocking on inodes inside
-                * another operation right now, they get caught later by
-                * xfs_sync.
-                */
-               if (!xfs_ilock_nowait(ip, XFS_ILOCK_SHARED))
-                       goto out;
-
-               if (xfs_ipincount(ip) || !xfs_iflock_nowait(ip))
-                       goto out_unlock;
-
-               /*
-                * Now we have the flush lock and the inode is not pinned, we
-                * can check if the inode is really clean as we know that
-                * there are no pending transaction completions, it is not
-                * waiting on the delayed write queue and there is no IO in
-                * progress.
-                */
-               if (xfs_inode_clean(ip)) {
-                       xfs_ifunlock(ip);
-                       error = 0;
-                       goto out_unlock;
-               }
-               error = xfs_iflush(ip, SYNC_TRYLOCK);
+               xfs_iunlock(ip, XFS_ILOCK_SHARED);
        }
 
- out_unlock:
-       xfs_iunlock(ip, XFS_ILOCK_SHARED);
- out:
        /*
         * if we failed to write out the inode then mark
         * it dirty again so we'll try again later.
