
Re: zero size file after power failure with kernel

To: Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx>
Subject: Re: zero size file after power failure with kernel
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Sat, 29 Aug 2009 15:13:32 -0700
Cc: xfs@xxxxxxxxxxx
In-reply-to: <200908292102.21710@xxxxxx>
References: <200908292102.21710@xxxxxx>
User-agent: Thunderbird (Macintosh/20090812)
Michael Monnerie wrote:
I have /home mounted like this:
/dev/sda3 on /disks/work1 type xfs (rw,noatime,logbufs=8,logbsize=256k,attr2,barrier,largeio,swalloc)

Hardware: onboard SATA with a single WD VelociRaptor drive.

My power supply melted and so I had a power fail and a sudden death crash.
( So please remember: even when you have a UPS, your power can fail ! )

After replacing the part, I had almost no issue with my KDE desktop. In earlier XFS releases I would regularly lose several config files on such occasions, either truncated to 0 length or containing only NULLs. So the situation has improved a lot.

But almost is not good enough: my kmail config file, of all things, was 0-sized. Obviously so: when I started kmail, it came up fresh without any accounts or config, and once I exited kmail the config was recreated with default values at about 12KB, while my real config is >200KB.

Shouldn't this be a thing of the past by now? I'd love to be able to rely on a crash not trashing any of my files anymore. I used to run reiserfs, and despite many crashes I never, not a single time, had such an issue. I'd really be pleased to see that kind of stability in XFS. I'm using barriers - what else must I do?

mfg zmi

This will depend on what kde is doing internally as well.

No filesystem can magically protect against buffered data loss on a crash. An application could certainly be doing something that results in this sort of thing. Without reading some kde code I can't say for sure, and I don't mean to blame KDE, but this isn't necessarily a bug in xfs.

