> That looks a little odd, repair can't seem to decide how many
> blocks that inode should have, waffling between 8 and 0.
>
> does the -v option give you any more interesting info?
rei:/# xfs_repair -v /dev/hdb1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
zero_log: head block 6687 tail block 6687
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
correcting nblocks for inode 44179985, was 0 - counted 8
correcting nblocks for inode 44180026, was 0 - counted 8
- agno = 3
correcting nblocks for inode 55046098, was 0 - counted 2
- agno = 4
correcting nblocks for inode 72672635, was 0 - counted 8
- agno = 5
- agno = 6
- agno = 7
- agno = 8
correcting nblocks for inode 138982095, was 0 - counted 10
correcting nblocks for inode 138982136, was 0 - counted 8
- agno = 9
- agno = 10
- agno = 11
correcting nblocks for inode 193438914, was 0 - counted 7
- agno = 12
correcting nblocks for inode 205774557, was 0 - counted 22
- agno = 13
- agno = 14
correcting nblocks for inode 241413320, was 0 - counted 9
- agno = 15
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- clear lost+found (if it exists) ...
- clearing existing "lost+found" inode
- marking entry "lost+found" to be deleted
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
data fork in regular inode 44179985 claims used block 3013232
correcting nblocks for inode 44179985, was 8 - counted 0
data fork in regular inode 44180026 claims used block 6158960
correcting nblocks for inode 44180026, was 8 - counted 0
- agno = 3
data fork in regular inode 55046098 claims used block 4061808
correcting nblocks for inode 55046098, was 2 - counted 0
- agno = 4
data fork in regular inode 72672635 claims used block 5110384
correcting nblocks for inode 72672635, was 8 - counted 0
- agno = 5
- agno = 6
- agno = 7
- agno = 8
data fork in regular inode 138982095 claims used block 9304688
correcting nblocks for inode 138982095, was 10 - counted 0
data fork in regular inode 138982136 claims used block 10353264
correcting nblocks for inode 138982136, was 8 - counted 0
- agno = 9
- agno = 10
- agno = 11
data fork in regular inode 193438914 claims used block 12450416
correcting nblocks for inode 193438914, was 7 - counted 0
- agno = 12
data fork in regular inode 205774557 claims used block 13498992
correcting nblocks for inode 205774557, was 22 - counted 0
- agno = 13
- agno = 14
data fork in regular inode 241413320 claims used block 15596144
correcting nblocks for inode 241413320, was 9 - counted 0
- agno = 15
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- ensuring existence of lost+found directory
- traversing filesystem starting at / ...
rebuilding directory inode 128
- traversal finished ...
- traversing all unattached subtrees ...
- traversals finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
corrupt dinode 44179985, extent total = 1, nblocks = 0. Unmount and run
xfs_repair.
fatal error -- couldn't map inode 44179985, err = 990
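Before deciding between another repair pass and a rebuild, it can help to map the inode numbers from the repair output back to path names, so you know which files are actually at risk. A minimal sketch (not from the thread itself), assuming the filesystem is still mounted at /archive2 as shown below and a POSIX find is available:

```shell
#!/bin/sh
# Map the inode numbers reported by xfs_repair back to path names.
# MOUNTPOINT and the inode list are copied from the repair output above;
# adjust them for your own filesystem.
MOUNTPOINT=/archive2
for ino in 44179985 44180026 55046098 72672635 138982095 \
           138982136 193438914 205774557 241413320; do
    # -xdev keeps find from crossing into other mounted filesystems;
    # -inum matches files by inode number.
    find "$MOUNTPOINT" -xdev -inum "$ino" 2>/dev/null
done
```

Note that -inum has to walk the whole tree comparing inode numbers, so this can take a while on a large filesystem.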
> How big is this filesystem, btw, and what does the xfs_info output
> look like?
rei:/# df | grep archive2
/dev/hdb1 58600560 44969692 13630868 77% /archive2
rei:/# xfs_info /archive2
meta-data=/archive2              isize=256    agcount=16, agsize=916081 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=14657296, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=7156, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
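As a sanity check (my addition, not from the thread): the df and xfs_info numbers above are consistent with each other, which suggests the superblock geometry itself is sane. A quick back-of-the-envelope, using the values copied from the output above:

```shell
#!/bin/sh
# Values taken from the xfs_info output above.
bsize=4096        # bytes per data block
blocks=14657296   # data section size in blocks

# Raw data-section size in 1K blocks, the unit df reports in.
echo $(( blocks * bsize / 1024 ))
# prints 58629184; df showed 58600560 1K-blocks, and the 28624 KiB
# difference is exactly the internal log (7156 blocks x 4 KiB).
```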
That's a whole lot of copy'n'pasting ;) .. Any ideas on how to sort this
out from the information above? The filesystem is still accessible (I'm
burning the stuff on it off now), but I'd rather not keep using it if
something is wrong, and I don't really want to start it from scratch :(
Cheers,
Daniel Palmer