On Sat, Nov 22, 2003 at 03:21:14PM -0800, Alberto Nava wrote:
> I've done some more digging on this issue. The reason the
> request is going down in 4K pages is that the direct-io code is
> giving up in do_direct_IO() and the request is issued as buffered IO :-(.
> The reason do_direct_IO gives up is that the first call to
> get_more_blocks() is returning an unmapped buffer head.
> This is a snip of the code that's failing (look for XXXXX):
Hmm... I don't see how we would pass back a buffer that
is not mapped for a direct write. linvfs_get_block_core is the
place to concentrate if you're digging in the source, that
and xfs_write of course.
> Even if I reserve space for the file, the direct IO still fails.
> I tried the same with ext3 and it does perform the direct IO on the new
> file. However, I really dislike the request sizes it's using; they're all
> over the place: 8k, 160, 200k, etc. I really like the 512K requests I'm
> getting with XFS, especially with the 320 SCSI controller I'm using.
> I'll try looking at the XFS code to see why it's returning an unmapped bh,
> but some help here would be greatly appreciated as I'm not familiar with
> the XFS code.
Well, this doesn't sound like expected behaviour on our part.
We do use the generic direct IO code in 2.6, so we may not be
interacting with it correctly. Can you figure out whether
we are passed a >4K "blocks" value there (with direct set),
and maybe put a trap in there to see whether the buffer_head
we've been asked to set up is mapped or not?
It might also be worth putting a printk in that "if (blocks)"
branch at the end of linvfs_get_block_core, to see whether we are
restricting b_size there on each get_block request.