
Corrupted log after crash

To: linux-xfs@xxxxxxxxxxx
Subject: Corrupted log after crash
From: Randy Gobbel <randy.gobbel@xxxxxxxxx>
Date: Sun, 30 Jan 2005 11:42:02 -0800
Reply-to: gobbel@xxxxxxx
Sender: linux-xfs-bounce@xxxxxxxxxxx
After a crash, I'm unable to mount my XFS filesystem.  This is the
version of XFS in Linux kernel 2.6.9 with Debian patches.  The
hardware is a RAID5 array on a RocketRAID 374 controller (kernel
module hpt374) with four 120GB drives.


xfs_check /dev/sda gives me this:

* ERROR: mismatched uuid in log
*            SB : 0ec97cf8-6f1d-42bf-9c17-b7632b4a8570
*            log: ccbf9118-6f1d-42a3-f517-b7631758a670
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_check.  If you are unable to mount the filesystem, then use
the xfs_repair -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.



xfs_repair -n /dev/sda gives me this output:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem starting at / ...
        - traversal finished ...
        - traversing all unattached subtrees ...
        - traversals finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.



xfs_logprint -t /dev/sda gives me:

xfs_logprint:
    data device: 0x800
    log device: 0x800 daddr: 351662432 length: 262144

* ERROR: mismatched uuid in log
*            SB : 0ec97cf8-6f1d-42bf-9c17-b7632b4a8570
*            log: ccbf9118-6f1d-42a3-f517-b7631758a670
    log tail: 778594215 head: 2271 state: <DIRTY>



xfs_logprint /dev/sda gives:

xfs_logprint:
    data device: 0x800
    log device: 0x800 daddr: 351662432 length: 262144

Header 0x3b wanted 0xfeedbabe
**********************************************************************
* ERROR: header cycle=59          block=2272                         *
**********************************************************************
Bad log record header
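
(For context on that last error: each XFS log record header begins with the
magic number 0xfeedbabe, stored big-endian on disk, and that is the check
xfs_logprint is applying here.  The sketch below is a hypothetical helper,
not part of xfsprogs; note that 0x3b is 59, which matches the cycle=59 in
the error banner, so the block at 2272 looks like an ordinary data block
stamped with the current cycle rather than a record header.)

```python
import struct

# XFS log record headers start with this big-endian magic number
# (XLOG_HEADER_MAGIC_NUM in the kernel sources).
XLOG_HEADER_MAGIC = 0xFEEDBABE

def first_word(block: bytes) -> int:
    """Return the first 32-bit big-endian word of a 512-byte log block."""
    (word,) = struct.unpack(">I", block[:4])
    return word

def is_log_record_header(block: bytes) -> bool:
    """True if the block starts with the log record header magic."""
    return first_word(block) == XLOG_HEADER_MAGIC

# Reconstructing the two cases from the xfs_logprint output above:
# the rejected block starts with 0x3b (= 59, the reported cycle number).
bad_block = struct.pack(">I", 0x3B) + b"\x00" * 508
good_block = struct.pack(">I", XLOG_HEADER_MAGIC) + b"\x00" * 508
```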


Any suggestions?  It looks like most of the log is probably OK,
despite the UUID mismatch.  How much am I likely to lose if I run
xfs_repair -L?  There should not have been much activity on this
filesystem for about half an hour before the crash.
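
(As a quick sanity check on that hunch, comparing the two UUIDs from the
xfs_check output character by character shows they agree on whole segments;
a minimal sketch:)

```python
sb_uuid  = "0ec97cf8-6f1d-42bf-9c17-b7632b4a8570"  # xfs_check: SB
log_uuid = "ccbf9118-6f1d-42a3-f517-b7631758a670"  # xfs_check: log

# Character positions where the two UUID strings differ.
diffs = [i for i, (a, b) in enumerate(zip(sb_uuid, log_uuid)) if a != b]

# Whole segments (e.g. "6f1d" and "b763") match exactly, which looks
# more like partial corruption of one UUID than a log that belongs to
# some entirely different filesystem.
matching_segments = [sb_uuid[9:13], sb_uuid[24:28]]
```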

The symptoms appear similar to what is described in Bug #194.  The
thread beginning with
http://oss.sgi.com/archives/linux-xfs/2001-07/msg01060.html describes
identical symptoms, also on a RAID5 system.

I see that there are a number of changes to XFS between kernels 2.6.9
and 2.6.10.  Are any of these likely to be relevant?

-Randy

