
Re: direct-IO writes strange behavior

To: Alberto Nava <beto@xxxxxxxxxxx>
Subject: Re: direct-IO writes strange behavior
From: Steve Lord <lord@xxxxxxx>
Date: Sun, 23 Nov 2003 11:35:38 -0600
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <3FBFEF6A.3000609@xxxxxxxxxxx>
References: <3FBECF7E.6010509@xxxxxxxxxxx> <3FBFEF6A.3000609@xxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.5) Gecko/20031014 Thunderbird/0.3
Alberto Nava wrote:
Hi,

I've done some more digging on this issue. The reason the requests are
going out in 4k pages is that the direct-io code gives up in
do_direct_IO() and the request is issued as buffered IO :-(.
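
For reference, the pattern I'm testing is roughly the following. This is
only a sketch, not my actual test harness; the file path, transfer size
and alignment are just placeholders:

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define IO_SIZE   (512 * 1024)  /* the 512K requests I'd like to see */
#define ALIGNMENT 4096          /* page-aligned user buffer */

int main(void)
{
    void *buf;
    int fd;

    /* brand-new file, so no blocks are mapped behind the write yet */
    fd = open("/mnt/xfs/testfile",
              O_CREAT | O_TRUNC | O_WRONLY | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    if (posix_memalign(&buf, ALIGNMENT, IO_SIZE) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }
    memset(buf, 0xab, IO_SIZE);

    /* this is the kind of request that ends up going out as 4k buffered IO */
    if (write(fd, buf, IO_SIZE) != IO_SIZE)
        perror("write");

    free(buf);
    close(fd);
    return 0;
}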

do_direct_IO() gives up because the first call to get_more_blocks()
returns an unmapped buffer head. Here is a snippet of the code that's
failing (look for XXXXXX):

static int do_direct_IO(struct dio *dio)
{
    const unsigned blkbits = dio->blkbits;
    const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
    struct page *page;
    unsigned block_in_page;
    struct buffer_head *map_bh = &dio->map_bh;
    int ret = 0;

    /* The I/O can start at any block offset within the first page */
    block_in_page = dio->first_block_in_page;

    while (dio->block_in_file < dio->final_block_in_request) {
        page = dio_get_page(dio);
        if (IS_ERR(page)) {
            ret = PTR_ERR(page);
            goto out;
        }

        while (block_in_page < blocks_per_page) {
            unsigned offset_in_page = block_in_page << blkbits;
            unsigned this_chunk_bytes;    /* # of bytes mapped */
            unsigned this_chunk_blocks;    /* # of blocks */
            unsigned u;

            if (dio->blocks_available == 0) {
                /*
                 * Need to go and map some more disk
                 */
                unsigned long blkmask;
                unsigned long dio_remainder;

                ret = get_more_blocks(dio);
                if (ret) {
                    page_cache_release(page);
                    goto out;
                }
                if (!buffer_mapped(map_bh))
                    goto do_holes;    XXXXXX
.....
do_holes:
            /* Handle holes */
            if (!buffer_mapped(map_bh)) {
                char *kaddr;

                /* AKPM: eargh, -ENOTBLK is a hack */
                if (dio->rw == WRITE)
                    return -ENOTBLK; XXXXXX


Even if I reserve space for the file, the direct IO still fails.
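
(For clarity, by "reserve space" I mean an XFS preallocation roughly
along these lines. Again only a sketch, not necessarily my exact code;
the header location and whether you go through ioctl() or xfsctl() may
differ on your install:)

#include <string.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <unistd.h>
#include <xfs/xfs_fs.h>         /* XFS_IOC_RESVSP64, struct xfs_flock64 */

/* preallocate 'len' bytes from the start of the file, file size unchanged */
static int reserve_space(int fd, off_t len)
{
    struct xfs_flock64 fl;

    memset(&fl, 0, sizeof(fl));
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = len;

    return ioctl(fd, XFS_IOC_RESVSP64, &fl);
}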

I tried the same thing with ext3 and it does perform the direct IO on the new file. However, I really dislike the request sizes it uses; they're all over the place: 8k, 160k, 200k, etc. I really like the 512K requests I'm getting with XFS, especially with the 320 SCSI controller I'm using.

I'll try looking at the XFS code to see why it's returning an unmapped bh, but some help here would be greatly appreciated as I'm not familiar with the XFS code.

Thanks
beto



This may be an interaction with the unwritten extent handling code.
Try this on a file system made with the mkfs option -d unwritten=0.
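For example (the device name is just a placeholder):

    mkfs.xfs -d unwritten=0 /dev/sdb1

That disables unwritten extents on the new filesystem, so you can rule
them out.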

Steve


