direct IO question
Utako Kusaka
u-kusaka at wm.jp.nec.com
Tue May 10 00:41:51 CDT 2011
Hi,
When I tested concurrent mmap writes and direct IO to the same file,
the file data was corrupted. The kernel version is 2.6.39-rc4.
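The test follows roughly this pattern (a hypothetical, simplified reproducer,
not my exact test program; the file name and sizes are arbitrary):

/*
 * Hypothetical reproducer sketch: one process keeps dirtying the file
 * through a shared mmap while another reads the same range with
 * O_DIRECT.  Not the exact test program.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEN (1 << 20)

int main(void)
{
	int fd = open("testfile", O_RDWR | O_CREAT, 0644);
	char *map, *buf;

	ftruncate(fd, LEN);
	map = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	if (fork() == 0) {
		/* child: keep dirtying the pages through the mapping */
		for (;;)
			memset(map, 'a', LEN);
	}

	/* parent: direct IO reads against the same file */
	close(fd);
	fd = open("testfile", O_RDWR | O_DIRECT);
	posix_memalign((void **)&buf, 4096, LEN);
	for (;;)
		pread(fd, buf, LEN, 0);
}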
I have two questions concerning xfs direct IO.
The first is that dirty pages can be released during a direct read. xfs direct IO
uses xfs_flushinval_pages(), which writes out and then releases dirty pages.
If pages are dirtied after filemap_write_and_wait_range() has run,
they are released by truncate_inode_pages_range() without being written out:
sys_read()
  vfs_read()
    do_sync_read()
      xfs_file_aio_read()
        xfs_flushinval_pages()
          filemap_write_and_wait_range()
          truncate_inode_pages_range()    <---
        generic_file_aio_read()
          filemap_write_and_wait_range()
          xfs_vm_direct_IO()
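For reference, the body of xfs_flushinval_pages() is roughly the following
(a simplified sketch from my reading of 2.6.39-rc4, with tracing and error
details left out, so it may not match the source exactly):

/* Simplified sketch of xfs_flushinval_pages(), not the exact source. */
int
xfs_flushinval_pages(
	xfs_inode_t	*ip,
	xfs_off_t	first,
	xfs_off_t	last,
	int		fiopt)
{
	struct address_space	*mapping = VFS_I(ip)->i_mapping;
	int			ret;

	xfs_iflags_clear(ip, XFS_ITRUNCATED);
	/* write back and wait on the requested range first ... */
	ret = filemap_write_and_wait_range(mapping, first,
			last == -1 ? LLONG_MAX : last);
	/*
	 * ... but a page that an mmap write dirties in this window,
	 * after the writeback above has completed, is simply dropped
	 * by the truncate below and never reaches the disk.
	 */
	if (!ret)
		truncate_inode_pages_range(mapping, first, last);
	return ret;
}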
ext3, in contrast, calls only generic_file_aio_read() and does not call
truncate_inode_pages_range():
sys_read()
  vfs_read()
    do_sync_read()
      generic_file_aio_read()
        filemap_write_and_wait_range()
        ext3_direct_IO()
xfs_file_aio_read() and xfs_file_dio_aio_write() both call the generic functions,
and both the xfs functions and the generic functions call filemap_write_and_wait_range().
So I wonder whether xfs_flushinval_pages() is necessary at all.
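For example, the O_DIRECT branch of generic_file_aio_read() already flushes
the range being read before calling into the filesystem (a trimmed-down
sketch from my reading, with the surrounding bookkeeping omitted):

/* Rough sketch of the O_DIRECT branch in generic_file_aio_read(). */
if (filp->f_flags & O_DIRECT) {
	loff_t size = i_size_read(inode);

	if (count && pos < size) {
		/* flush and wait on exactly the range being read ... */
		retval = filemap_write_and_wait_range(mapping, pos,
				pos + iov_length(iov, nr_segs) - 1);
		/*
		 * ... then hand the same request to the filesystem,
		 * e.g. ext3_direct_IO() or xfs_vm_direct_IO().
		 */
		if (!retval)
			retval = mapping->a_ops->direct_IO(READ, iocb,
					iov, pos, nr_segs);
	}
}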
The second question is about the range written back by xfs_flushinval_pages() when
it is called from the direct IO paths: it runs from the start pos to -1 (i.e. LLONG_MAX),
not the actual IO range. Is there a reason for that? In generic_file_aio_read() and
generic_file_direct_write() the range is from pos to (pos + len - 1).
I think xfs_flushinval_pages() should be called with the same range.
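Concretely, I mean something like this in the xfs_file_aio_read() direct IO path
(an untested sketch; "size" here stands for the total length of the request, and
the current code passes -1 as the last offset):

/* Untested sketch: flush/invalidate only the range covered by this IO. */
if (unlikely(ioflags & IO_ISDIRECT) && inode->i_mapping->nrpages) {
	ret = -xfs_flushinval_pages(ip,
			(iocb->ki_pos & PAGE_CACHE_MASK),
			iocb->ki_pos + size - 1,	/* instead of -1 */
			FI_REMAPF_LOCKED);
	if (ret)
		return ret;
}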
Regards,
Utako