[Patch] xfs: serialise unaligned direct IOs

Amit Sahrawat amit.sahrawat83 at gmail.com
Thu Nov 3 01:07:03 CDT 2011


This is needed for the long-term kernel 2.6.35.14.
Please let me know of any changes or suggestions.

Thanks & Regards,
Amit Sahrawat

xfs: serialise unaligned direct IOs

This patch was published in the 2.6.38 kernel (original reference:
http://oss.sgi.com/archives/xfs/2011-01/msg00013.html), but it
cannot be applied to the 2.6.35 kernel directly because a required
function is absent; it has been reimplemented here to resolve the
xfstests test 240 failure.

When two concurrent unaligned, non-overlapping direct IOs are issued
to the same block, the direct IO layer will race to zero the block.
The result is that one of the concurrent IOs will overwrite data
written by the other IO with zeros. This is demonstrated by
xfsqa test 240.

To avoid this problem, serialise all unaligned direct IOs to an
inode with a big hammer. We need a big hammer approach as we need to
serialise AIO as well, so we can't just block writes on locks.
Hence, the big hammer is calling xfs_ioend_wait() while holding out
other unaligned direct IOs from starting.

We don't bother trying to serialise aligned vs unaligned IOs as
they are overlapping IOs and the result of concurrent overlapping IOs
is undefined - the result of either IO is a valid result, so we let
them race. Hence we only penalise unaligned IO, which already has a
major overhead compared to aligned IO, so this isn't a major problem.

diff -Nurp linux-Orig/fs/xfs/linux-2.6/xfs_file.c linux-Updated/fs/xfs/linux-2.6/xfs_file.c
--- linux-Orig/fs/xfs/linux-2.6/xfs_file.c	2011-10-28 12:10:52.000000000 +0530
+++ linux-Updated/fs/xfs/linux-2.6/xfs_file.c	2011-10-29 12:34:45.000000000 +0530
@@ -587,6 +587,7 @@ xfs_file_aio_write(
 	xfs_fsize_t		isize, new_size;
 	int			iolock;
 	size_t			ocount = 0, count;
+	int			unaligned_io = 0;
 	int			need_i_mutex;

 	XFS_STATS_INC(xs_write_calls);
@@ -641,7 +642,10 @@ start:
 			return XFS_ERROR(-EINVAL);
 		}

-		if (!need_i_mutex && (mapping->nrpages || pos > ip->i_size)) {
+		if ((pos & mp->m_blockmask) || ((pos + count) & mp->m_blockmask))
+			unaligned_io = 1;
+
+		if (!need_i_mutex && (unaligned_io || mapping->nrpages || pos > ip->i_size)) {
 			xfs_iunlock(ip, XFS_ILOCK_EXCL|iolock);
 			iolock = XFS_IOLOCK_EXCL;
 			need_i_mutex = 1;
@@ -700,11 +704,15 @@ start:
 		}

 		if (need_i_mutex) {
-			/* demote the lock now the cached pages are gone */
-			xfs_ilock_demote(ip, XFS_IOLOCK_EXCL);
+			if (unaligned_io)
+				xfs_ioend_wait(ip);
+			/* demote the lock now the cached pages are gone if we can */
+			else {
+				xfs_ilock_demote(ip, XFS_IOLOCK_EXCL);
+				iolock = XFS_IOLOCK_SHARED;
+			}
 			mutex_unlock(&inode->i_mutex);
 
-			iolock = XFS_IOLOCK_SHARED;
 			need_i_mutex = 0;
 		}

