
Re: XFS corruption during power-blackout

To: Chris Wedgwood <cw@xxxxxxxx>
Subject: Re: XFS corruption during power-blackout
From: Bryan Henderson <hbryan@xxxxxxxxxx>
Date: Thu, 30 Jun 2005 12:30:20 -0400
Cc: Al Boldi <a1426z@xxxxxxxxx>, linux-fsdevel@xxxxxxxxxxxxxxx, linux-xfs@xxxxxxxxxxx, Steve Lord <lord@xxxxxxx>, "'Nathan Scott'" <nathans@xxxxxxx>, reiserfs-list@xxxxxxxxxxx
In-reply-to: <254889.27725ab660aa106eb6acc07307d71ef1fbd5b6fd366aebef9e2f611750fbcb467e46e8a4.IBX@taniwha.stupidest.org>
Sender: linux-xfs-bounce@xxxxxxxxxxx

>I don't know if this is true for all drives but NONE of the ones I had
>access to when testing did anything like save the cache --- pretty
>much all data that was inflight got lost.

As another point of reference: were these ATA (personal-class) or SCSI 
(commercial-class) drives, or both?

Is write caching enabled by default on typical SCSI devices?

>Linux does have a concept of
>write barriers but these are presently not implemented for XFS right
>now.  Once they are I assume sync + poweroff will be reliable with
>caching enabled.

But be careful with the 'sync' program/system call.  As defined by POSIX, 
it is not a synchronizing operation.  It's supposed to cause buffered 
writes to get hardened some time soon, not right now.  So in theory, you 
can't pull the plug after typing "sync."  In Linux, the implementation has 
changed a few times in this respect.  In some versions, it at least 
_tries_ to implement "everything that was buffered when sync() started is 
hardened before sync() returns."  In others, it implements "everything 
that was buffered when sync() started is hardened before the next sync() 
returns," and some 'sync' programs do multiple sync()s.  And it's also 
filesystem-type-dependent.  I don't know exactly what the present state 
is.
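
To make the weaker interpretation concrete: under "everything that was 
buffered when sync() started is hardened before the next sync() returns," 
a shutdown script would have to call sync() twice before it is safe to cut 
power.  A minimal sketch in C, purely for illustration and not taken from 
any particular 'sync' implementation:

    #include <unistd.h>

    int main(void)
    {
        sync();   /* schedules everything buffered up to this point */
        sync();   /* under the weaker semantics described above, this
                     returns only after the first call's writes have
                     been hardened */
        return 0;
    }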

fsync(), on the other hand, is a true synchronizing operation.
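
For example (a minimal sketch, not from the original discussion; the file 
name and record are just placeholders), an application that needs a record 
on stable storage before proceeding writes it and then calls fsync() on the 
same descriptor; fsync() does not return until the data has been pushed out 
to the device (subject, of course, to the drive write cache discussed 
above):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char rec[] = "critical record\n";
        int fd = open("journal.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) { perror("open"); return 1; }

        if (write(fd, rec, sizeof(rec) - 1) != (ssize_t)(sizeof(rec) - 1)) {
            perror("write"); close(fd); return 1;
        }
        if (fsync(fd) < 0) {   /* blocks until the data is hardened */
            perror("fsync"); close(fd); return 1;
        }
        close(fd);
        return 0;
    }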

--
Bryan Henderson                     IBM Almaden Research Center
San Jose CA                         Filesystems

