
Re: Problem repairing filesystem

To: Paul Schutte <paul@xxxxxxxx>, XFS mailing list <linux-xfs@xxxxxxxxxxx>
Subject: Re: Problem repairing filesystem
From: Seth Mos <knuffie@xxxxxxxxx>
Date: Wed, 14 Aug 2002 14:18:56 +0200
In-reply-to: <3D5A3174.1A91A496@up.ac.za>
Sender: owner-linux-xfs@xxxxxxxxxxx
At 12:31 14-8-2002 +0200, Paul Schutte wrote:

> I ran an FTP server on a Pentium II 333MHz with 256M RAM, using the
> 2.4.9-31-xfs kernel.
> Used 4 x 120 GB IDE drives in a RAID 5 array on an Adaptec 2400 hardware
> RAID controller.
> There is a 4 GB root partition and a +/- 320 GB data partition.
>
> One of the drives failed and the machine crashed.

Adaptec is not known for the quality of their RAID drivers; aacraid comes to mind. I suggest using software RAID instead. I like software RAID.

> We replaced the drive and rebuilt the array.

Why rebuild the array when you have hardware RAID 5? You should be able to boot the degraded array and work from there.

> I booted up with a CD that I created a while ago with
> 2.4.19-pre9-20020604 and mounted a

I understand that the machine did not boot any more after the crash? Could it be that the drive had write caching enabled, which made it fail horribly in the end and crashed the machine?

> nfs root partition with all the xfs tools on it.
> We ran xfs_repair (version 2.2.1) on the root partition of the raid.
> A lot of the files have the dreaded zero problem, but apart from that it
> is mountable and usable.

The zero problem is fixed in the 1.1 release and should no longer be present. That was one of _the_ important fixes in the 1.1 release.

> fatal error -- can't read block 0 for directory inode 2097749

> When you mount the filesystem, it is empty (except for lost+found, which
> is also empty)

Do you have the ability to fetch the current CVS tools and see if that works better?

> The output of xfs_repair is large, about 300k bzip2'ed. It would be best
> if interested parties download it.



> Have I lost the 320G partition or does someone still have a trick up
> their sleeve?

I think it is lost, but maybe one of the developers has some clues.

> Would it be possible to make xfs_repair use a lot less memory?
> My guess is that the filesystem got its final blow by xfs_repair
> exiting prematurely.

Quite possible. There have been some fixes for xfs_repair's memory usage, but I don't think every low-memory case is handled yet.

Did the disk have a lot of small files (on the order of a million files in one directory or so)?
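For what it's worth, a rough sketch of how one might keep xfs_repair's memory in check. Note these options (`-n`, `-m`, `-P`) come from later xfsprogs releases, not necessarily the 2.2.1 tool discussed above, and `/dev/sda2` is just a placeholder for the data partition:

```shell
# Dry run first: -n reports problems without writing anything to the device,
# so an out-of-memory exit here cannot damage the filesystem further.
xfs_repair -n /dev/sda2

# Then repair with memory usage constrained:
#   -m 192  caps the memory xfs_repair will use at roughly 192 MB
#   -P      disables inode/directory prefetching, which also reduces memory use
xfs_repair -m 192 -P /dev/sda2
```

Running the no-modify pass first at least tells you whether the repair will complete on a box with only 256M RAM before committing any changes.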


It might just be your lucky day, if you only knew.
