On 01.07.2003 23:33 Christian Guggenberger wrote:
Hi Folks,
Just after starting a weekly xfs_fsr run on a 400GB LVM device (HW RAID),
the filesystem shut down uncleanly. Now I can't mount it again.
This filesystem was exported via NFS.
Just an hour later, the second volume on the hardware RAID5 (6*160GB) crashed as well.
Neither filesystem is mountable, and xfs_repair can't fix them either
(xfs_repair -L should do the trick, but I haven't tried it yet).
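For reference, the sequence I have in mind (not yet run; /dev/lvm/vol0 is just a placeholder for my first LVM volume) is roughly:

    # non-destructive check first, to see what xfs_repair would change
    xfs_repair -n /dev/lvm/vol0

    # only if nothing else helps: zero the log and repair
    # (this throws away whatever is still sitting in the dirty log)
    xfs_repair -L /dev/lvm/vol0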
If I trust the HW RAID's firmware, it should be running fine, although one of
the disks has some bad blocks _and_ I/O timeouts.
I'm running this kernel:
SGI XFS snapshot 2.4.20-2003-01-14_00:43_UTC with ACLs, quota, no debug enabled
Here's the kernel log for mounting both volumes with default options:
XFS mounting filesystem lvm(58,0)
Starting XFS recovery on filesystem: lvm(58,0) (dev: 58/0)
XFS: failed to read root inode
XFS mounting filesystem lvm(58,1)
Starting XFS recovery on filesystem: lvm(58,1) (dev: 58/1)
XFS: dirty log entry has mismatched uuid - can't recover
XFS: log mount/recovery failed
XFS: log mount failed
What should I do now?
Try xfs_repair -L? Go ahead with recent CVS kernels?
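One thing I'd probably try first is mounting read-only without log replay, just to see whether the data is still reachable (device names and mount points are again placeholders for my setup):

    # skip log recovery; read-only so nothing gets written
    mount -t xfs -o ro,norecovery /dev/lvm/vol0 /mnt/rescue0
    mount -t xfs -o ro,norecovery /dev/lvm/vol1 /mnt/rescue1

If that works, I could at least copy the data off before zeroing the logs.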
Christian