
Re: the thing with the binary zeroes

To: linux xfs mailing list <linux-xfs@xxxxxxxxxxx>
Subject: Re: the thing with the binary zeroes
From: Andi Kleen <ak@xxxxxx>
Date: Fri, 11 Feb 2005 14:29:08 +0100
Cc: madduck@xxxxxxxxxxx
In-reply-to: <20050211131546.GA32336@xxxxxxxxxxxxxxxxxxxxx> (martin f. krafft's message of "Fri, 11 Feb 2005 14:15:46 +0100")
References: <20050211121829.GA30049@xxxxxxxxxxxxxxxxxxxxx> <m1sm43uu8h.fsf@xxxxxx> <20050211131546.GA32336@xxxxxxxxxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Gnus/5.110002 (No Gnus v0.2) Emacs/21.3 (gnu/linux)
martin f krafft <madduck@xxxxxxxxxxx> writes:

> You will note that most of my email is a summary of what you just
> told me. I only ask two questions, none of which you have answered.
> I still very much appreciate your time, but please try to understand
> where I am coming from. If you believe that this issue has been
> sufficiently answered "many times", then I kindly ask you to point

I've read explanations similar to mine several times on this
list, and have occasionally given them myself.

> Let's assume that a truncating open() gets interrupted just after
> the metadata are flushed and before the new contents makes it to the
> disk. Then, the old file contents is still on the disk, but XFS
> hides it behind a curtain of zeroes. How can I get at the original
> data in such a case?

You could in theory, by grepping the block device and searching
for the data (it hasn't been physically destroyed unless there
is parallel activity on the fs). But the XFS code can't find it
anymore because the metadata connecting the inode to the file
data is gone. That is why you see the zeroes instead.
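
Something along these lines would do that scan (just a sketch, not
something from this thread; the device path and search string are
made-up examples). It reads the raw device in large chunks and
prints the byte offsets where a known fragment of the lost file
still turns up:

import sys

def scan_device(device, needle, chunk_size=64 * 1024 * 1024):
    # Keep an overlap of len(needle)-1 bytes so matches that
    # straddle a chunk boundary are not missed.
    overlap = len(needle) - 1
    offset = 0          # device offset where the next chunk starts
    tail = b""
    with open(device, "rb") as dev:
        while True:
            chunk = dev.read(chunk_size)
            if not chunk:
                break
            buf = tail + chunk
            pos = buf.find(needle)
            while pos != -1:
                # buf begins at device offset (offset - len(tail))
                yield offset - len(tail) + pos
                pos = buf.find(needle, pos + 1)
            tail = buf[-overlap:] if overlap > 0 else b""
            offset += len(chunk)

if __name__ == "__main__":
    # e.g.  python scan.py /dev/sdb1 "known file contents"
    for hit in scan_device(sys.argv[1], sys.argv[2].encode()):
        print("match at byte offset %d" % hit)

Once you have an offset you can dd the surrounding blocks out and
look at them by hand; reattaching them to the filesystem is not
something XFS will do for you.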

BTW, other journaling filesystems have similar issues; it's just
that their flush times for data and metadata are closer together,
so the race window where this can happen is much smaller and you
rarely see it.
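
For what it's worth, an application can avoid depending on the
data/metadata flush ordering altogether by fsync()ing the new data
before renaming it into place. A rough sketch (my illustration, not
from this thread; the file names are hypothetical):

import os

def replace_file_safely(path, data):
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # force the data blocks to disk first
    os.rename(tmp, path)       # then atomically swap in the new file

if __name__ == "__main__":
    replace_file_safely("config.txt", b"new contents\n")

After a crash you then see either the old contents or the new
contents, not a file full of zeroes.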

-Andi

