Re: XFS Filesystem is broken and cant repair and mount!

To: Dragon <Sunghost@xxxxxx>
Subject: Re: XFS Filesystem is broken and cant repair and mount!
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Fri, 10 Oct 2014 08:20:10 +1100
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <trinity-c6202f4d-95cc-42f9-a8f2-86e3b9b231a9-1412860507253@3capp-gmx-bs31>
References: <trinity-c6202f4d-95cc-42f9-a8f2-86e3b9b231a9-1412860507253@3capp-gmx-bs31>
User-agent: Mutt/1.5.21 (2010-09-15)
On Thu, Oct 09, 2014 at 03:15:07PM +0200, Dragon wrote:
> Hello, while i copy some files to my software raid device the xfs
> filesystem reports an uncorrectable error unmount and stops.
> Reboot didnt work, same failure. Answers to the FAQS:
> 1.Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u3 x86_64 GNU/Linux
> 2.xfsprogs 3.1.7+b1 amd64

I'd upgrade xfsprogs before doing anything else.
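A rough sketch of what that upgrade might look like on a Debian box (the backports suite name is illustrative; building from the upstream git tree is the other common route):

```shell
# Check which version is currently installed
xfs_repair -V

# Option 1: pull a newer package from backports, if one is available
# (suite name here is illustrative -- match it to your release)
apt-get -t wheezy-backports install xfsprogs

# Option 2: build a newer release from the upstream source tree
git clone git://git.kernel.org/pub/scm/fs/xfs/xfsprogs-dev.git
cd xfsprogs-dev
make            # top-level make drives the configure step
sudo make install
```

Either way, re-check `xfs_repair -V` afterwards before touching the damaged filesystem.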

> 13. dmesg:
> [    7.541885] SGI XFS with ACLs, security attributes, realtime, large 
> block/inode numbers, no debug enabled
> [    7.542692] SGI XFS Quota Management subsystem
> [    7.569679] XFS (md2): Mounting Filesystem
> [    7.799071] XFS (md2): Starting recovery (logdev: internal)
> [    8.992087] XFS (md2): xlog_recover_inode_pass2: Bad inode magic number, 
> dip = 0xffff88031c344400, dino bp = 0xffff88032050d0c0, ino = 3469995060
> [    8.992354] XFS (md2): Internal error xlog_recover_inode_pass2(1) at line 
> 2248 of file /build/linux-eKuxrT/linux-3.2.60/fs/xfs/xfs_log_recover.c.  
> Caller 0xffffffffa03fe677

Bad inode cluster on disk. You need to run xfs_repair on the
filesystem.
I'd suggest running "xfs_repair -n" to see whether that's the only
error and whether it's likely to be able to repair without making a
mess. If you don't have backups, you might want to mount -o
ro,norecovery and take a backup before trying to repair properly.
If you're really paranoid, take a metadump of the filesystem,
restore the metadump to a file and see if repair can fix the image
file first.
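The cautious workflow above might look something like this (device name taken from the dmesg output; mount point and image file names are placeholders):

```shell
# 1. Dry run: report what repair would do, without writing anything
xfs_repair -n /dev/md2

# 2. Mount read-only, skipping log replay, and take a backup first
mount -o ro,norecovery /dev/md2 /mnt
# ... copy anything important off /mnt ...
umount /mnt

# 3. (Paranoid) capture a metadata-only image and test repair on it,
#    leaving the real device untouched
xfs_metadump /dev/md2 md2.metadump
xfs_mdrestore md2.metadump md2.img
xfs_repair md2.img

# 4. Only if the image repairs cleanly, run repair for real
xfs_repair /dev/md2
```

Note that a metadump contains metadata only, so step 3 validates what repair will do to the filesystem structure, not your file data; the backup in step 2 is still essential.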


Dave Chinner
