
xfsdump segfaults, xfs_repair suffers fatality

To: linux-xfs@xxxxxxxxxxx
Subject: xfsdump segfaults, xfs_repair suffers fatality
From: Kacper Wysocki <kacperw@xxxxxxxxx>
Date: Sun, 13 Oct 2002 06:47:28 -0400
Sender: linux-xfs-bounce@xxxxxxxxxxx
Hi,
I'm running a hardware RAID 0 system with 10 disks, one of which failed recently (er, today). That was a good 40GB of data on a 70GB logical drive, and since it's all striped, that usually means it's all lost. But not entirely, because you'd think the data on all the drives that didn't fail is still there, right? So I managed to replace the disk, rebuild the filesystem (with xfs_repair) and get a running XFS filesystem with a good number of lost+found entries. The problem is that xfs_repair failed with the following message:

fatal error -- can't read block 0 for directory inode 50331793

I assume this is an I/O error, since that's the point where xfs_repair can't help. I'm thinking the only way to remedy this is to dump the fs and low-level format the drive, but I don't currently have the storage space.

Also, after this error message I already had quite a populated lost+found directory, and went about looking through it. About halfway through I had the crazy thought of re-running xfs_repair, without realizing this would remove all the previous lost+found entries!! Now, about a year ago I posted to this list about how to recover deleted files on an XFS filesystem, and learned that it's virtually impossible. What was suggested then was that one could "xfsdump -J - /dev/rd/c0d0p1 | xfsrestore - dump -i" to recover some of the data. In addition, my (very limited) understanding of filesystems and I/O tells me someone should be able to write an application that searches the drive (or dump) in question and pulls out any complete files of a recognized type it finds (based on magic numbers etc.); a rough sketch of the idea is below. I'm thinking someone already wrote such an application, and I'm thinking it's GPL'ed or something. (HA! *hoping* more like)
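To illustrate what I mean (this is just a hypothetical sketch I threw together, not an existing tool; the carve_jpegs name and the output naming are my own invention), something like this could scan a raw dump for JPEG magic numbers and carve out candidate files:

#!/usr/bin/env python
# carve_jpegs.py -- hypothetical sketch: scan a raw image/dump for JPEG
# start-of-image (FF D8 FF) and end-of-image (FF D9) markers and write
# each candidate out as a separate file. A real carver would have to be
# far more careful about fragmentation and false positives.
import sys

SOI = b"\xff\xd8\xff"    # JPEG start-of-image marker
EOI = b"\xff\xd9"        # JPEG end-of-image marker

def carve(path):
    count = 0
    with open(path, "rb") as img:
        # Reading the whole image at once is only OK for a small test
        # file; a 70GB device would need chunked reads.
        data = img.read()
        start = data.find(SOI)
        while start != -1:
            end = data.find(EOI, start)
            if end == -1:
                break
            with open("carved_%05d.jpg" % count, "wb") as out:
                out.write(data[start:end + 2])
            count += 1
            start = data.find(SOI, end + 2)
    print("carved %d candidate JPEGs" % count)

if __name__ == "__main__":
    carve(sys.argv[1])

Something like "python carve_jpegs.py dumpfile" would then pull out every unfragmented JPEG it can find; other file types would just need their own magic numbers added.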

There are two problems with the above:

1. I haven't been able to find such an application

2. xfsdump segfaults. It's weird, because I know it's worked before on the same system.
A simple "xfsdump - /dev/rd/c0d0p1" yields the following:
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 2.2.2 (dump format 3.0) - Running single-threaded
xfsdump: WARNING: most recent level 0 dump was interrupted, but not resuming that dump since resume (-R) option not specified
xfsdump: level 0 dump of comotion:/home
xfsdump: dump date: Sun Oct 13 06:37:51 2002
xfsdump: session id: 04323a2f-089e-47e5-afa8-71b2f6e3b6fd
xfsdump: session label: ""
xfsdump: ino map phase 1: skipping (no subtrees specified)
xfsdump: ino map phase 2: constructing initial dump list
xfsdump: ino map phase 3: skipping (no pruning necessary)
xfsdump: ino map phase 4: skipping (size estimated in phase 2)
xfsdump: ino map phase 5: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 13563575104 bytes
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
Segmentation fault

Could you please tell me what I should do to get xfsdump working? What is wrong with xfs_repair? Is it because I've seriously b0rked the fs? Does anyone know of an app fitting the description above?


Sincerely,
        Kacper Wysocki

