> > I guess the big one that I'm curious about is: will an xfsdump of a live
> > XFS filesystem produce reliable dumps every time? Or should one
> > definitely be using tar for backups - in which case I guess the problem
> > mentioned here about a week ago(?) that there's currently no easy way to
> > back up just the extended information remains.
> I haven't read the thread in the kernel mailing list, so please
> excuse me if I'm missing the point.
The thread on the linux-kernel list (started by Alexander Viro) described
how doing a "dd if=/dev/hdaX of=/dev/null" (i.e. reading directly from
the block device) on a mounted ext2 filesystem could destroy the filesystem
due to a race condition.
Alexander said initially:
: getblk gives us unlocked, non-uptodate bh
: wait_on_buffer() does nothing
: read from device locks it and starts IO
: we zero it out.
: on-disk data overwrites our zeroes.
: we mark it dirty
: bdflush writes the old data (_not_ zeroes) back to disk.
: block_read() vs. block_write() has the same race. I'm going
: through the list of all wait_on_buffer() users right now.
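Spelled out, the interleaving in that sketch looks roughly like this. This is a toy Python model of the shared buffer cache, not actual kernel code; every name here is illustrative, and it only mimics the ordering of events Viro lists:

```python
OLD = "old-on-disk-data"

def race():
    """Replay the buffer-cache interleaving from Viro's description."""
    disk = {0: OLD}   # block 0 as it sits on disk
    cache = {}        # shared buffer cache: block -> buffer state

    # Filesystem path: getblk() hands back an unlocked, non-uptodate buffer.
    cache[0] = {"data": None, "uptodate": False, "dirty": False}
    # wait_on_buffer() does nothing -- the buffer is not locked.

    # The filesystem zeroes the buffer (say, a freshly allocated block)...
    cache[0]["data"] = "zeroes"

    # ...but the raw read from the block device had locked the buffer and
    # started I/O on it; when that I/O completes, the stale on-disk
    # contents overwrite our zeroes and the buffer is marked uptodate.
    cache[0]["data"] = disk[0]
    cache[0]["uptodate"] = True

    # The filesystem marks the buffer dirty, believing it holds zeroes.
    cache[0]["dirty"] = True

    # bdflush later writes the buffer back: the OLD data, not the zeroes.
    if cache[0]["dirty"]:
        disk[0] = cache[0]["data"]

    return disk[0]
```

The point of the model is the last line: what lands back on disk is the stale data the filesystem thought it had replaced, which is exactly how reading the raw device can corrupt a mounted filesystem.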
At least reiserfs came out as being affected too in the follow-ups.
There's one mention of wait_on_buffer in fs/pagebuf/page_buf_io.c, and no
block_read/block_write calls that I can see from a quick grep of the source,
but perhaps XFS has its own block interface with the same problem - and
as there was no mention so far here, I thought I'd bring it up.
> Xfsdump will read from the filesystem like tar (but more efficiently) and
> does not need to read directly from the device, so it should be as
> reliable as tar.
Excellent. All the more reason for us to try to move everything here to XFS.
As I'm sure many appreciate, backing up huge amounts of other people's data,
and then having to worry that the backup itself could be corrupting that data,
isn't what I want to be doing... now to go reconfigure our backup systems to
run tar - which I guess is a good thing anyway, as I can restore tar onto an
XFS disk later :).