
Re: the thing with the binary zeroes

To: linux xfs mailing list <linux-xfs@xxxxxxxxxxx>
Subject: Re: the thing with the binary zeroes
From: Andi Kleen <ak@xxxxxx>
Date: Fri, 11 Feb 2005 14:55:10 +0100
Cc: madduck@xxxxxxxxxxx
In-reply-to: <20050211133558.GA32501@xxxxxxxxxxxxxxxxxxxxx> (martin f. krafft's message of "Fri, 11 Feb 2005 14:35:58 +0100")
References: <20050211121829.GA30049@xxxxxxxxxxxxxxxxxxxxx> <m1sm43uu8h.fsf@xxxxxx> <20050211131546.GA32336@xxxxxxxxxxxxxxxxxxxxx> <m1oeeruswr.fsf@xxxxxx> <20050211133558.GA32501@xxxxxxxxxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Gnus/5.110002 (No Gnus v0.2) Emacs/21.3 (gnu/linux)
martin f krafft <madduck@xxxxxxxxxxx> writes:

> also sprach Andi Kleen <ak@xxxxxx> [2005.02.11.1429 +0100]:
>> I've read explanations similar to mine several time on the
>> list and also given them occasionally myself.
> Again, not to be read as a personal attack, but your explanation is
> not what I was looking for. It added very little to the description
> I included in the first post.

Well, it's the full story. Nothing to add.

> Maybe this is something that could be considered for future XFS
> versions? A tool that can ignore the zeroing-precaution and simply
> give access to the data the inode points to, even though it would
> not normally connect the two.

It can't. The pointers from the inode to the extents with the data
are overwritten at this point.

The only good "fix" probably would be to make XFS flush metadata less
aggressively. If the metadata were always flushed at roughly the same time
as the file data is written, you would rarely see this.

But I suspect doing that would need large scale rewrites and
redesign in the log module. It currently uses an inefficient format
to store log buffers, which prevents the log from buffering too much.

You can decrease the flush delay for file data, though, with the
sysctls I pointed out earlier. That will not fix it, but it will make
the time window in which it can happen smaller.
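(The earlier message listing those sysctls is not quoted in this thread, so
the exact names below are an assumption, not Andi's list. On a 2.6 kernel
the plausible knobs are the generic VM writeback sysctls plus the
XFS-specific sync interval; something along these lines would shrink the
window:)

```shell
# Assumed sysctls -- the original mail naming them is not quoted here.
# Values are in centiseconds (1/100 s).

# Expire dirty file data after 5 s instead of the default 30 s
sysctl -w vm.dirty_expire_centisecs=500

# Wake the writeback thread every 1 s instead of every 5 s
sysctl -w vm.dirty_writeback_centisecs=100

# XFS-specific: run xfssyncd every 10 s instead of the default 30 s
sysctl -w fs.xfs.xfssyncd_centisecs=1000
```

Shorter intervals cost some extra I/O, but they narrow the crash window in
which metadata has hit the log while the corresponding data has not.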

