

To: xfs <xfs@xxxxxxxxxxx>
Subject: direct IO question
From: Utako Kusaka <u-kusaka@xxxxxxxxxxxxx>
Date: Tue, 10 May 2011 14:41:51 +0900
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; ja; rv:1.9.2.17) Gecko/20110414 Thunderbird/3.1.10
Hi,

When I tested concurrent mmap writes and direct IO to the same file,
the file was corrupted. The kernel version is 2.6.39-rc4.
I have two questions concerning xfs direct IO.

The first is that dirty pages can be released in a direct read without
being written out. XFS direct IO uses xfs_flushinval_pages(), which
writes out and then releases dirty pages. If pages are marked dirty
after filemap_write_and_wait_range() has run, they will be released in
truncate_inode_pages_range() without being written out.

sys_read()
  vfs_read()
    do_sync_read()
      xfs_file_aio_read()
        xfs_flushinval_pages()
          filemap_write_and_wait_range()
          truncate_inode_pages_range()      <---
        generic_file_aio_read()
          filemap_write_and_wait_range()
          xfs_vm_direct_IO()

ext3, in contrast, calls only generic_file_aio_read() and never calls
truncate_inode_pages_range().

sys_read()
  vfs_read()
    do_sync_read()
      generic_file_aio_read()
        filemap_write_and_wait_range()
        ext3_direct_IO()

xfs_file_aio_read() and xfs_file_dio_aio_write() call the generic
functions, and both the xfs functions and the generic functions call
filemap_write_and_wait_range(). So I wonder whether
xfs_flushinval_pages() is necessary at all.


Second, the write range in xfs_flushinval_pages() when called from the
direct IO path runs from the start pos to -1 (that is, LLONG_MAX),
rather than just the IO range. Is there any reason for this?
In generic_file_aio_read() and generic_file_direct_write() the range is
from the start pos to (pos + len - 1).
I think xfs_flushinval_pages() should be called with the same range.

Regards,
Utako
