Hi,
I have an xfs filesystem (on lvm on a 3ware raid1) that might be
slightly corrupted.
xfs_check doesn't find any problems and exits with a nice exitcode of 0.
xfs_repair complains a few times about
| LEAFN node level is 1 inode <number> bno = 8388608
during phases 3 and 4.
It lists 5 inodes, and all of them are directories.
This looks like the very same issue that was described on this list back in June:
http://oss.sgi.com/archives/linux-xfs/2005-06/msg00049.html
Russell Howe on the #xfs channel on OFTC suggested I also attach the
xfs_db output of one such inode:
xfs_db> inode 97055
xfs_db> print
core.magic = 0x494e
core.mode = 040750
core.version = 1
core.format = 2 (extents)
core.nlinkv1 = 2
core.uid = 100
core.gid = 105
core.flushiter = 680
core.atime.sec = Tue Sep 27 16:27:58 2005
core.atime.nsec = 490530254
core.mtime.sec = Thu Sep 22 09:53:25 2005
core.mtime.nsec = 483402945
core.ctime.sec = Thu Sep 22 09:53:25 2005
core.ctime.nsec = 483402945
core.size = 24576
core.nblocks = 8
core.extsize = 0
core.nextents = 8
core.naextents = 0
core.forkoff = 0
core.aformat = 2 (extents)
core.dmevmask = 0
core.dmstate = 0
core.newrtbm = 0
core.prealloc = 0
core.realtime = 0
core.immutable = 0
core.append = 0
core.sync = 0
core.noatime = 0
core.nodump = 0
core.gen = 0
next_unlinked = null
u.bmx[0-7] = [startoff,startblock,blockcount,extentflag] 0:[0,6350,1,0]
1:[1,13826,1,0] 2:[2,43526,1,0] 3:[3,19019,1,0] 4:[4,15787,1,0] 5:[5,13426,1,0]
6:[8388608,13639,1,0] 7:[16777216,1243,1,0]
xfs_db>
If there isn't anything you would like me to try with the filesystem, I
will go ahead and do what the poster in the June thread suggested:
move the contents of each corrupted directory into a newly created
directory and then rmdir the offending one.
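For the record, here is a rough sketch of that per-directory workaround. The path is hypothetical (it stands in for one of the five directories xfs_repair flagged), and this only moves regular entries, not dot-files:

```shell
#!/bin/sh
set -e

# BADDIR stands in for one of the directories xfs_repair complained
# about; here a temporary directory is used purely for illustration.
BADDIR=$(mktemp -d)
touch "$BADDIR/file1" "$BADDIR/file2"

# Move the contents into a freshly created directory...
NEWDIR="${BADDIR}.new"
mkdir "$NEWDIR"
mv "$BADDIR"/* "$NEWDIR"/

# ...rmdir the (now empty) offending directory...
rmdir "$BADDIR"

# ...and give the new directory the old name.
mv "$NEWDIR" "$BADDIR"

ls "$BADDIR"
```

This recreates the directory with a fresh inode while keeping the old pathname, which is all the June poster's suggestion amounts to.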
Cheers,
Peter
--
PGP signed and encrypted | .''`. ** Debian GNU/Linux **
messages preferred. | : :' : The universal
| `. `' Operating System
http://www.palfrader.org/ | `- http://www.debian.org/
|