On Thu, 2002-08-15 at 17:41, Christian Rice wrote:
> I'm wondering if this is a recoverable situation:
Ugh, so you actually ran repair on the filesystem here, by the
look of it. Those values in the superblock are very
strange: 18446744073709551615 is -1 as a 64-bit value.
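(For the curious, that number is what you get when a -1 is read back
as an unsigned 64-bit quantity, i.e. all 64 bits set. Easy to check:)

```python
# 18446744073709551615 is -1 masked down to an unsigned 64-bit field:
# every one of the 64 bits is set.
val = (-1) & 0xFFFFFFFFFFFFFFFF   # force -1 into 64 unsigned bits
print(val)                        # 18446744073709551615
print(val == 2**64 - 1)           # True
```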
Hopefully the dirty log prevented repair from actually
doing something to the disk. Have you attempted to mount
the filesystem without running repair first? In general,
after some form of crash, just mounting the fs is the
best thing to do.
Also, please update your xfsprogs to the latest version from
oss; there are almost certainly bug fixes since the copy you have.
So try to mount the fs and report what that does. Then, if it
mounts, unmount it, try running xfs_check on the filesystem,
and send us the output. It may also be useful to dd off the
first 4K of the filesystem and send that as well:
dd if=/dev/hdb3 of=xxx bs=4k count=1
Do you know what happened to the system? Was this a loss of power
or a software crash? If there was valuable information on the disk
then we can try to help get some of it back. It sounds like it
was a system disk though.
> [root@ozu root]# xfs_repair /dev/hdb3
> Phase 1 - find and verify superblock...
> sb root inode value 18446744073709551615 inconsistent with calculated
> value 13835051801809780864
> resetting superblock root inode pointer to 18446744069414584448
> sb realtime bitmap inode 18446744073709551615 inconsistent with
> calculated value 13835051801809780865
> resetting superblock realtime bitmap ino pointer to 18446744069414584449
> sb realtime summary inode 18446744073709551615 inconsistent with
> calculated value 13835051801809780866
> resetting superblock realtime summary ino pointer to
> Phase 2 - using internal log
> - zero log...
> ERROR: The filesystem has valuable metadata changes in a log which needs
> to be replayed. Mount the filesystem to replay the log, and unmount it
> before re-running xfs_repair. If you are unable to mount the
> filesystem, then use the -L option to destroy the log and attempt a repair.
> Note that destroying the log may cause corruption -- please attempt a
> mount of the filesystem before doing this.
> In the past, sometimes the entire contents of the disk end up in
> lost+found after xfs_repair -L. I ran xfs_repair -n, and it seemed to
> want to unlink quite a few inodes (thousands, including system files
> that could not possibly have been in active use during operation). Yes,
> the system crashed; I'm not absolutely positive I had write caching
> turned off on this system (I've been using hdparm -W 0).
> It was running 2.4.18 with xfs 1.1, not the latest CVS stuff. Also, I
> ran the checks on a system with xfsprogs-2.0.3-0.rpm installed.
> Can anybody offer any hope, or do I mkfs now? If there's a recovery
> from this, that would save me and my sysadmins countless hours of work
> into the future, as we have perhaps 100 machines running xfs on linux.
> christian rice, director of technology
> tippett studio 510.649.9711
Steve Lord voice: +1-651-683-3511
Principal Engineer, Filesystem Software email: lord@xxxxxxx