
xfs_repair fails with corrupt dinode

To: "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
Subject: xfs_repair fails with corrupt dinode
From: "SCHEFFER, Philippe" <Philippe.SCHEFFER@xxxxxxxx>
Date: Thu, 29 Jan 2015 16:48:15 +0100
Accept-language: fr-FR
Hi,

I have a corrupted XFS filesystem. When I try to repair it, I get this error:
disconnected inode 1427919, moving to lost+found
corrupt dinode 1427919, extent total = 1, nblocks = 0.  This is a bug.
Please capture the filesystem metadata with xfs_metadump and
report it to xfs@xxxxxxxxxxxx
cache_node_purge: refcount was 1, not zero (node=0x1f6c19620)

fatal error -- 117 - couldn't iget disconnected inode
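
(For reference, the repair was invoked roughly like this; /dev/sdX is just a placeholder for the actual device:)

xfs_repair -n /dev/sdX    # no-modify check
xfs_repair /dev/sdX       # actual repair attempt, aborts with the error above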

I captured the metadata with xfs_metadump, but the file is 1.4 GB, so I can't send it to you.
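
(If a compressed dump would be acceptable, metadumps tend to compress very well. Something along these lines, with /dev/sdX again standing in for the real device, should shrink it considerably:)

xfs_metadump -g /dev/sdX /tmp/xfs_md.img    # -g shows progress; filenames are obfuscated by default
xz -9 /tmp/xfs_md.img                       # metadata images usually compress by a large factor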

I printed inode 1427919:

xfs_db> inode 1427919
xfs_db> print
core.magic = 0x494e
core.mode = 0100644
core.version = 2
core.format = 2 (extents)
core.nlinkv2 = 1
core.onlink = 0
core.projid_lo = 0
core.projid_hi = 0
core.uid = 1144
core.gid = 1144
core.flushiter = 19
core.atime.sec = Fri Jan  9 21:50:25 2015
core.atime.nsec = 132886289
core.mtime.sec = Wed Oct 15 16:28:33 2014
core.mtime.nsec = 166360243
core.ctime.sec = Wed Oct 15 16:28:33 2014
core.ctime.nsec = 166360243
core.size = 2536
core.nblocks = 1
core.extsize = 0
core.nextents = 1
core.naextents = 0
core.forkoff = 0
core.aformat = 2 (extents)
core.dmevmask = 0
core.dmstate = 0
core.newrtbm = 0
core.prealloc = 0
core.realtime = 0
core.immutable = 0
core.append = 0
core.sync = 0
core.noatime = 0
core.nodump = 0
core.rtinherit = 0
core.projinherit = 0
core.nosymlinks = 0
core.extsz = 0
core.extszinherit = 0
core.nodefrag = 0
core.filestream = 0
core.gen = 837444118
next_unlinked = null
u.bmx[0] = [startoff,startblock,blockcount,extentflag] 0:[0,320660992,1,0]
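
(If it helps, I could also dump the single data block referenced by that extent with xfs_db in read-only mode; 320660992 is the startblock from the u.bmx[0] entry above, and /dev/sdX is again a placeholder:)

xfs_db -r /dev/sdX
xfs_db> fsblock 320660992
xfs_db> type text
xfs_db> print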

What can I do to solve this problem?

Thanks in advance.

Philippe