
Re: The question about parallel direct IO in xfs

To: Zheng Liu <gnehzuil.liu@xxxxxxxxx>
Subject: Re: The question about parallel direct IO in xfs
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Fri, 20 Jan 2012 16:13:04 +1100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20120120035508.GA12703@xxxxxxxxx>
References: <20120120035508.GA12703@xxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Fri, Jan 20, 2012 at 11:55:08AM +0800, Zheng Liu wrote:
> Hi all,
> 
> Recently we encountered an issue in ext4: when we do direct IO, ext4
> acquires inode->i_mutex in generic_file_aio_write(), which degrades
> performance. Here is the detailed conversation:
> http://www.spinics.net/lists/linux-ext4/msg30058.html
> 
> I know that xfs uses i_iolock, which is a rw_semaphore, to allow
> parallel direct IO operations. But I have a question: since we do
> read/write operations in direct IO, it seems there is a window that
> can cause data inconsistency.

Yes, there is. That's a feature, not a bug.

> For example, one thread does a write operation to overwrite some data
> at an offset. Meanwhile, another thread does a read operation at the
> same offset. We assume that the write is issued earlier than the read.

Your assumption is wrong.

> Hence, we should read the new data. Although this is difficult to
> trigger, it is possible that the read is issued to the disk first and
> we read old data. I don't know whether this problem exists in xfs or
> not. Thank you.

Fundamentally, the result of concurrent read and write direct IO
operations to the same offset is undefined because the filesystem
has no control of IO reordering in lower layers of the storage
stack. IOWs, we give no guarantees for IO ordering or coherency of
concurrent direct IO to the same offset.

If your application requires this sort of coherency, then you either
need to use buffered IO or provide these coherency guarantees
yourself, because direct IO doesn't provide them. File range locking
is an example of how your application can coordinate its IO to
avoid this problem.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
