
Re: xfs + rmap?

To: linux-xfs@xxxxxxxxxxx
Subject: Re: xfs + rmap?
From: "Adam McKenna" <adam-dated-1013039770.6a5085@xxxxxxxxxxxx>
Date: Fri, 1 Feb 2002 15:56:09 -0800
In-reply-to: <1012604886.7434.489.camel@xxxxxxxxxxxxxxxxxxxx>
Mail-followup-to: linux-xfs@xxxxxxxxxxx
References: <20020201215351.GF23997@xxxxxxxxxxxx> <1012603950.25088.72.camel@UberGeek> <20020201225013.GH23997@xxxxxxxxxxxx> <1012604886.7434.489.camel@xxxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
User-agent: Mutt/1.3.25i
On Fri, Feb 01, 2002 at 05:08:06PM -0600, Steve Lord wrote:
> > issue 1 - fs/buffer.c:
> > 
> 
> This one is a little nasty in that it seems to remove the possibility of
> a failure at this point, and with xfs in there, where we do a
> write_buffer_locked call in place of the submit_bh, we can definitely
> fail. I am guessing a little bit since I need more context to really
> work it out, and I am not really feeling like patching up yet another
> kernel version here.

After looking at the code, it seems like leaving the XFS sync_page_buffers 
code in as-is would be the right thing to do here, and it probably would not 
have any adverse effects; however, I'm not even a C programmer, much less a 
kernel programmer, so I'm not 100% comfortable making that call.  Any 
comments would be appreciated.
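
For what it's worth, as far as I can tell the functional difference between
the two versions boils down to the lines below (pulled straight out of the
full snippets that follow, nothing here is new code): the rmap12 version
skips a buffer it cannot lock and fires the write off with submit_bh(),
which reports nothing back, while the XFS version waits on a buffer that is
already being laundered and passes the result of write_buffer_locked() back
up through its return value:

    /* rmap12: skip a locked buffer, submit and forget */
    if (test_and_set_bit(BH_Lock, &bh->b_state))
        continue;
    ...
    submit_bh(WRITE, bh);

    /* xfs: wait for an in-flight launder, and report the write status */
    if (test_and_set_bit(BH_Lock, &bh->b_state)) {
        if (!test_bit(BH_launder, &bh->b_state))
            continue;
        wait_on_buffer(bh);
        tryagain = 1;
        continue;
    }
    ...
    tryagain = write_buffer_locked(bh) == 0;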

Here are the two code snippets for informational value:

rmap12:

static void sync_page_buffers(struct buffer_head *head)
{
    struct buffer_head * bh = head;

    do {
        if (!buffer_dirty(bh) && !buffer_locked(bh))
            continue;

        /* Don't start IO first time around.. */
        if (!test_and_set_bit(BH_Wait_IO, &bh->b_state))
            continue;

        /* If we cannot lock the buffer just skip it. */
        if (test_and_set_bit(BH_Lock, &bh->b_state))
            continue;

        /* Second time through we start actively writing out.. */
        if (!atomic_set_buffer_clean(bh)) {
            unlock_buffer(bh);
            continue;
        }

        __mark_buffer_clean(bh);
        get_bh(bh);
        set_bit(BH_launder, &bh->b_state);
        bh->b_end_io = end_buffer_io_sync;
        submit_bh(WRITE, bh);
    } while ((bh = bh->b_this_page) != head);

    return;
}

xfs:

static int sync_page_buffers(struct buffer_head *head)
{
    struct buffer_head * bh = head;
    int tryagain = 0;

    do {
        if (!buffer_dirty(bh) && !buffer_locked(bh))
            continue;

        /* Don't start IO first time around.. */
        if (!test_and_set_bit(BH_Wait_IO, &bh->b_state))
            continue;

        /* Second time through we start actively writing out.. */
        if (test_and_set_bit(BH_Lock, &bh->b_state)) {
            if (!test_bit(BH_launder, &bh->b_state))
                continue;
            wait_on_buffer(bh);
            tryagain = 1;
            continue;
        }

        if (!atomic_set_buffer_clean(bh)) {
            unlock_buffer(bh);
            continue;
        }

        __mark_buffer_clean(bh);
        get_bh(bh);
        set_bit(BH_launder, &bh->b_state);
        tryagain = write_buffer_locked(bh) == 0;
    } while ((bh = bh->b_this_page) != head);

    return tryagain;
}

called by:

if (sync_page_buffers(bh)) {
    /* no IO or waiting next time */
    gfp_mask = 0;
    goto cleaned_buffers_try_again;
}
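
(Just to spell out why that return value matters for a merge -- this bit is
my own illustration, not code from either tree, and the only real names in
it are the ones from the snippets above: since the rmap12 version returns
void, its caller has nothing to test, so a buffer that was skipped or could
not be written is simply not retried, whereas the XFS caller backs off and
tries again without starting new IO.)

    sync_page_buffers(bh);          /* rmap12: void, nothing to act on */

    if (sync_page_buffers(bh)) {    /* xfs: retry if anything was missed */
        gfp_mask = 0;
        goto cleaned_buffers_try_again;
    }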

--Adam

-- 
Adam McKenna <adam@xxxxxxxxxxxx>   | GPG: 17A4 11F7 5E7E C2E7 08AA
http://flounder.net/publickey.html |      38B0 05D0 8BF7 2C6D 110A

