
Re: UPDATE: low-level XFS drive recovery

To: Adam Milazzo <adam@xxxxxxxxxxxx>
Subject: Re: UPDATE: low-level XFS drive recovery
From: Steve Lord <lord@xxxxxxx>
Date: 24 Apr 2002 09:59:44 -0500
Cc: "'linux-xfs@xxxxxxxxxxx'" <linux-xfs@xxxxxxxxxxx>
In-reply-to: <44D5677E9B8478468DE31AE26DF0AFD6077C92@adsl-64-165-6-11.dsl.sndg02.pacbell.net>
References: <44D5677E9B8478468DE31AE26DF0AFD6077C92@adsl-64-165-6-11.dsl.sndg02.pacbell.net>
Sender: owner-linux-xfs@xxxxxxxxxxx

OK, but put on your bulletproof vest and thermally insulated gloves first....

Try this for starters:

        mount -o ro,norecovery

This skips log recovery entirely. Try copying files out of the
filesystem; with luck it is in good enough shape to open directories and
read data out of files. The mount may crash in the process, or the
filesystem may shut itself down if it hits corruption. In that case,
unmount it, mount it again with the same options, and avoid the paths
that caused the problems.

Get as much data out this way as you can; a read-only, no-recovery
mount cannot make anything worse.
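The sequence above, as a sketch (the device is from the original post;
the mount point and copy target are assumptions, so adjust to your
setup; run as root):

```shell
# Mount read-only and skip log replay (mount point is an assumption).
mount -o ro,norecovery /dev/hdb1 /mnt/rescue

# Copy out whatever still opens; -a preserves ownership and timestamps.
# The directory names come from the xfs_repair output quoted below.
cp -a /mnt/rescue/backup /mnt/rescue/desktop /rescue-target/ || true

# If the filesystem shuts down mid-copy, remount the same way and skip
# the paths that triggered the shutdown.
umount /mnt/rescue
mount -o ro,norecovery /dev/hdb1 /mnt/rescue
```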

Next, run xfs_repair -L (no -n this time). The files it said it would
junk will probably be placed in lost+found by repair, so it may
actually keep all of your data.
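For the xfs_db question in the quoted mail below: a read-only sketch of
inspecting one of the inodes that xfs_repair complained about (the inode
number is taken from that output; these are standard xfs_db commands,
but treat it as a sketch and check xfs_db(8) for your version):

```shell
# Open the filesystem read-only (-r) and print the inode's on-disk
# structure, then its extent map (file offset, start block, count).
xfs_db -r -c 'inode 12583040' -c 'print' /dev/hdb1
xfs_db -r -c 'inode 12583040' -c 'bmap' /dev/hdb1
```

The bmap output gives you the extents directly, which is exactly what
you would need to pull the file contents off the raw device.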


On Tue, 2002-04-23 at 22:45, Adam Milazzo wrote:
> After running xfs_repair -n, I get some stuff that looks like this:
> entry "backup" in directory inode 128 points to free inode 12583040, would junk entry
> entry "desktop" in directory inode 128 points to free inode 16777344, would junk entry
> entry "t" in directory inode 128 points to free inode 33554240, would junk entry
> ...as well as others...
> These are exactly the three directories I need to recover!!
> However, from the look of the message, it seems like it's going to "junk"
> the entry, making it more difficult to recover.
> I'm thinking that directory inode 128 is the root directory, and the
> inodes mentioned are the directory inodes of those subdirectories. Is
> there a way (using xfs_db perhaps?) to get the inodes and/or extents of
> the files in those directories? Could I use this information to recover
> my files (note that there is important information regarding the
> situation in my original post, below)?
> I'm trying to figure out how to use xfs_db to do this...
> If anybody would be kind enough to give me a few instructions on this,
> or point me to some documentation about the format of the directory
> inodes (whatever I would need to get at the file extents), I would
> greatly appreciate it.
> Thanks in advance,
> Adam M.
> -----Original Message-----
> From: Adam Milazzo
> To: 'linux-xfs@xxxxxxxxxxx'
> Sent: 4/23/2002 12:36 PM
> Subject: low-level XFS drive recovery
> In a bout of impatient, early-morning foolishness, while trying to
> quickly "format" /dev/hda1 (mounted under /mnt), I did an 'rm -rf /*'
> from a chroot'd shell, and didn't realize until I pressed Enter that I
> had /dev/hdb1 mounted. However, it was too late, as the second drive was
> nearly instantly wiped clean, taking with it all my important stuff! So
> I went to bed.
> I learned a few lessons, like making better use of 'mount -o ro'.
> However, the damage was already done and I am trying to restore the data
> on that drive. I know that the XFS FAQ claims that there's no way to
> undelete, and that's understandable. However, I might be in luck in this
> case. The drive was freshly formatted and had a number of large files
> copied to it from another drive, and no writing/deleting was done after
> that point (except when it was deleted by rm -rf). Also, no writing has
> been done since the deletion. My hope is that the files are [still] in
> contiguous blocks on the disk.
> My first question is: How likely is it that after writing some (rather
> large, 100 meg average, but up to 700 meg) files to a freshly formatted
> XFS partition, that they would be in contiguous blocks?
> Second: after the rm -rf /* recursed into that drive's mount point and
> did its dirty work, is there anything left of the directory structure?
> Or was that all wiped out?
> I dumped the first 8 gigs of the drive to a file on another drive, and
> am writing a program to scan that dump file and attempt to pull out
> anything that looks like a data file (basically by scanning for valid
> file headers). However, it's slow work, and is almost useless if the
> files are not each stored in a single contiguous run of blocks (hence my first
> question). Also, if there's anything left of the directory structure
> that I could use to find where files begin and the filename, that would
> be very helpful.
> Perhaps it's relatively trivial to restore the entire drive just by
> rebuilding the directory structure, given the special circumstances of
> my situation. Or perhaps the entire thing would be extremely difficult
> because the files are broken up.
> So, if anybody could provide some information that would be helpful,
> and/or point me to some good information on the low-level details of the
> filesystem structure that might be useful in aiding my recovery of the
> data (or giving me enough information that I can deem it pointless
> without going through all the work), it would be greatly appreciated.
> Also, does anybody know of a good hex editor (or sector editor)? I'm
> looking for the following features (In decreasing order of importance):
> * Fast searching of text and binary data
> * Ability to open huge files (>2 GB)
> * A display of the word, dword, and maybe quadword value that begins at
> the byte under the cursor.
> * Ability to select a chunk of data and save it to a disk (directly, or
> in a round-about way)
> * A built-in calculator?
> curses-based would be okay, but something for gnome/X would be nice.
> Thanks a lot in advance!
> Adam M.
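
As for the header-scanning program described in the quoted mail above:
GNU grep can already do a first pass over the dump, printing the byte
offset of every candidate file header. The magic numbers below (JPEG
and gzip) are just examples; substitute the ones for your file types,
and `dump.img` stands in for your 8 GB dump file.

```shell
# Scan the raw dump for candidate file headers.  -a treats the binary
# file as text, -b prefixes each match with its byte offset, -o prints
# only the match, -P enables \xNN escapes (GNU grep).  LC_ALL=C avoids
# locale surprises on binary data.
LC_ALL=C grep -aboP '\xff\xd8\xff|\x1f\x8b\x08' dump.img | cut -d: -f1
```

Each printed number is a byte offset into the dump where a header
begins; if the big files really are contiguous, carving from one offset
to the next is a reasonable first attempt.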

Steve Lord                                      voice: +1-651-683-3511
Principal Engineer, Filesystem Software         email: lord@xxxxxxx
