On Wed, 2010-08-04 at 08:56 +1000, Dave Chinner wrote:
> On Tue, Aug 03, 2010 at 10:34:25AM -0700, Mingming Cao wrote:
> > On Fri, 2010-07-30 at 14:53 +1000, Dave Chinner wrote:
> > > On Thu, Jul 29, 2010 at 08:53:24PM -0600, Matthew Wilcox wrote:
> > > > On Fri, Jul 30, 2010 at 08:45:16AM +1000, Dave Chinner wrote:
> > > > > If we get two unaligned direct IO's to the same filesystem block
> > > > > that is marked as a new allocation (i.e. buffer_new), then both IOs
> > > > > will
> > > > > zero the portion of the block they are not writing data to. As a
> > > > > result, when the IOs complete there will be a portion of the block
> > > > > that contains zeros from the last IO to complete rather than the
> > > > > data that should be there.
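To illustrate the race described above, here is a toy model in Python (not kernel code; the block size, offsets, and sequential "completion order" are invented for the example). Each unaligned write to a newly allocated block zeroes the portion it does not cover, so whichever IO completes last wipes out the other's data:

```python
BLOCK = 4096  # hypothetical filesystem block size

def unaligned_write_new_block(block, off, data):
    # Model of an unaligned DIO to a buffer_new block: the writer
    # zero-fills the parts of the block it is not writing, then
    # submits the whole block.
    buf = bytearray(BLOCK)          # zero-filled sub-block padding
    buf[off:off + len(data)] = data
    block[:] = buf                  # whole-block "disk write"

block = bytearray(BLOCK)
unaligned_write_new_block(block, 0, b"A" * 512)    # IO 1: head of block
unaligned_write_new_block(block, 512, b"B" * 512)  # IO 2: completes last

# IO 2's zeroing has clobbered IO 1's data:
assert block[512:1024] == b"B" * 512
assert block[0:512] == b"\x00" * 512   # the "A"s are gone
```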
> > > I don't want any direct IO for XFS to go through the page cache -
> > > unaligned or not. Using the page cache for the unaligned blocks
> > > would also be much worse for performance than this method because it
> > > turns unaligned direct IO into 3 IOs - the unaligned head block, the
> > > aligned body and the unaligned tail block. It would also be a
> > > performance hit you take on every single dio, whereas this way the
> > > hit is only taken when an overlap is detected.
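The three-way split Dave describes can be sketched like this (a toy model in Python; the 512-byte alignment unit and the function name are assumptions for illustration, not the actual kernel code path):

```python
SECTOR = 512  # assumed alignment unit for the example

def split_dio(offset, length):
    """Split [offset, offset+length) into up to three IOs:
    an unaligned head, an aligned body, and an unaligned tail."""
    end = offset + length
    # First aligned boundary at or after offset (capped at end).
    head_end = min((offset + SECTOR - 1) // SECTOR * SECTOR, end)
    # Last aligned boundary at or before end (never before head_end).
    tail_start = max(end // SECTOR * SECTOR, head_end)
    parts = []
    if head_end > offset:
        parts.append(("head", offset, head_end))
    if tail_start > head_end:
        parts.append(("body", head_end, tail_start))
    if end > tail_start:
        parts.append(("tail", tail_start, end))
    return parts
```

An unaligned write such as `split_dio(100, 1000)` yields three IOs (head, body, tail), which is the per-IO overhead being objected to, while a fully aligned write like `split_dio(512, 1024)` stays a single IO.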
> > Is this problem also possible in the DIO, non-AIO case? (In ext4 it
> > only happens with AIO+DIO+unaligned.) If not, could we simply force
> > unaligned AIO+DIO to be synchronous? It would still be direct IO...
> There is nothing specific to AIO about this bug. XFS (at least)
> allows concurrent DIO writes to the same inode regardless of whether
> they are dispatched via AIO or multiple separate threads and so the
> race condition exists outside just the AIO context...
Okay, yeah. Ext4 prevents direct IO writes to the same inode from
multiple threads, so this is not an issue for the non-AIO case.

How does XFS serialize direct IO (aligned) to the same file offset (or
overlapping offsets) from multiple threads?