
Re: [Patch] xfs: serialise unaligned direct IOs

To: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Subject: Re: [Patch] xfs: serialise unaligned direct IOs
From: Amit Sahrawat <amit.sahrawat83@xxxxxxxxx>
Date: Thu, 3 Nov 2011 16:49:53 +0530
Cc: Dave Chinner <david@xxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <20111103100647.GA1002@xxxxxxxxxxxxx>
References: <CADDb1s0WUfvt8N+hMATboKxbMUZdk2N-R2e=KFH2JvGUjbigBg@xxxxxxxxxxxxxx> <20111103070246.GA10579@xxxxxxxxxxxxx> <CADDb1s3UN4HMKEA2kSEM0HsUCC7DE63B1oJAoL6QpqXBdDCEqQ@xxxxxxxxxxxxxx> <20111103100647.GA1002@xxxxxxxxxxxxx>
Here is the updated patch, after incorporating all the suggested modifications and a clean checkpatch.pl run.

Thanks & Regards,
Amit Sahrawat

xfs: serialise unaligned direct IOs

[ This patch was published in the 2.6.38 kernel (original reference:
http://oss.sgi.com/archives/xfs/2011-01/msg00013.html), but it
cannot be applied to the 2.6.35 kernel directly because of the
absence of a required function; it is reimplemented here to resolve
the xfstests test 240 failure. ]

When two concurrent unaligned, non-overlapping direct IOs are issued
to the same block, the direct IO layer will race to zero the block.
The result is that one of the concurrent IOs will overwrite data
written by the other IO with zeros. This is demonstrated by the
xfsqa test 240.

To avoid this problem, serialise all unaligned direct IOs to an
inode with a big hammer. We need a big hammer approach as we need to
serialise AIO as well, so we can't just block writes on locks.
Hence, the big hammer is calling xfs_ioend_wait() while holding out
other unaligned direct IOs from starting.

We don't bother trying to serialise aligned vs unaligned IOs as
they are overlapping IOs, and the result of concurrent overlapping
IOs is undefined - the result of either IO is a valid result, so we
let them race. Hence we only penalise unaligned IO, which already
has a major overhead compared to aligned IO, so this isn't a major
problem.

Signed-off-by: Dave Chinner <david@xxxxxxxxxxxxx>
Signed-off-by: Amit Sahrawat <amit.sahrawat83@xxxxxxxxx>
Signed-off-by: Ajeet Yadav <ajeet.yadav.77@xxxxxxxxx>

diff -Nurp linux-Orig/fs/xfs/linux-2.6/xfs_file.c linux-Updated/fs/xfs/linux-2.6/xfs_file.c
--- linux-Orig/fs/xfs/linux-2.6/xfs_file.c      2011-11-02 12:10:52.000000000 +0530
+++ linux-Updated/fs/xfs/linux-2.6/xfs_file.c   2011-11-03 16:39:08.000000000 +0530
@@ -587,6 +587,7 @@ xfs_file_aio_write(
        xfs_fsize_t             isize, new_size;
        int                     iolock;
        size_t                  ocount = 0, count;
+       int                     unaligned_io = 0;
        int                     need_i_mutex;

        XFS_STATS_INC(xs_write_calls);
@@ -640,8 +641,26 @@ start:
                        xfs_iunlock(ip, XFS_ILOCK_EXCL|iolock);
                        return XFS_ERROR(-EINVAL);
                }
+       /*
+        * In most cases the direct IO writes will be done with IOLOCK_SHARED
+        * allowing them to be done in parallel with reads and other direct IO
+        * writes. However, if the IO is not aligned to filesystem blocks, the
+        * direct IO layer needs to do sub-block zeroing and that requires
+        * serialisation against other direct IOs to the same block. In this
+        * case we need to serialise the submission of the unaligned IOs so
+        * that we don't get racing block zeroing in the dio layer.
+        * To avoid the problem with aio, we also need to wait for outstanding
+        * IOs to complete so that unwritten extent conversion is completed
+        * before we try to map the overlapping block. This is currently
+        * implemented by hitting it with a big hammer (i.e. xfs_ioend_wait()).
+        */
+
+               if ((pos & mp->m_blockmask) ||
+                  ((pos + count) & mp->m_blockmask))
+                       unaligned_io = 1;

-               if (!need_i_mutex && (mapping->nrpages || pos > ip->i_size)) {
+               if (!need_i_mutex &&
+                  (unaligned_io || mapping->nrpages || pos > ip->i_size)) {
                        xfs_iunlock(ip, XFS_ILOCK_EXCL|iolock);
                        iolock = XFS_IOLOCK_EXCL;
                        need_i_mutex = 1;
@@ -700,12 +719,18 @@ start:
                }

                if (need_i_mutex) {
-                       /* demote the lock now the cached pages are gone */
-                       xfs_ilock_demote(ip, XFS_IOLOCK_EXCL);
-                       mutex_unlock(&inode->i_mutex);
-
-                       iolock = XFS_IOLOCK_SHARED;
-                       need_i_mutex = 0;
+                       if (unaligned_io)
+                               xfs_ioend_wait(ip);
+                       else {
+                               /*
+                                * demote the lock now the cached pages
+                                * are gone if we can
+                                */
+                               xfs_ilock_demote(ip, XFS_IOLOCK_EXCL);
+                               iolock = XFS_IOLOCK_SHARED;
+                               mutex_unlock(&inode->i_mutex);
+                               need_i_mutex = 0;
+                       }
                }

                trace_xfs_file_direct_write(ip, count, iocb->ki_pos, ioflags);




On Thu, Nov 3, 2011 at 3:36 PM, Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
> On Thu, Nov 03, 2011 at 03:29:18PM +0530, Amit Sahrawat wrote:
>> > You probably should keep the original Signoff and reviewed-by tags,
>> > and add your editor note on the top into [ ] brackets.
>> Ok, will do so in the final patch. Actually, I was unaware of what
>> information to keep in backported patches.
>
> The standard procedure is to keep patches basically as-is.  This doesn't
> quite apply for your case, so I think just adding a comment in
> [ ] brackets on the top is the best you can do.
>
>> > You also need to make the i_mutex unlock and need_i_mutex update
>> > conditional here, otherwise you still serialize all O_DIRECT writes.
>> >
>> You mean keeping need_i_mutex = 0 and the mutex_unlock() as part of the
>> 'else' statement?
>
> Yes.
>
>
