On Thu, Nov 23, 2006 at 11:40:38AM -0500, Justin Piszcz wrote:
> Here is the info:
>
> Script started on Thu Nov 23 09:55:38 2006
> root@1[~]# xfs_repair -n /dev/hda2
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
> - scan filesystem freespace and inode maps...
> - found root inode chunk
> Phase 3 - for each AG...
> - scan (but don't clear) agi unlinked lists...
> - process known inodes and perform inode discovery...
> - agno = 0
> - agno = 1
> - agno = 2
> - agno = 3
> - agno = 4
> - agno = 5
> - agno = 6
> - agno = 7
> data fork in regular inode 939526080 claims used block 114661
> bad data fork in inode 939526080
> would have cleared inode 939526080
......
> data fork in regular inode 939526111 claims used block 114692
> bad data fork in inode 939526111
> would have cleared inode 939526111
Looks like half an inode cluster has been trashed in some way (32
consecutive inodes are bad). All the following errors appear to be a
direct result of those trashed inodes. Are you using 256-byte
inodes? If so, the 32 inodes would have been written in a single
buffer, and so that buffer write would be suspect.
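If you want to double-check, the inode size can be read straight off the
superblock with xfs_db (read-only). Something like the following, though the
field names are from memory, so verify them against your xfs_db version:

    xfs_db -r -c "sb 0" -c "print" /dev/hda2 | grep -E "inodesize|inopblock"

With 256-byte inodes, 32 inodes x 256 bytes is 8k, which is consistent with
all 32 going out in a single buffer write.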
FWIW, Irix XFS actually validates inode buffers before they get
written out, so if it was a bad write it might have been caught on
Irix. Unfortunately, we don't do those checks in Linux (most of the
hooks are there, just not used), so it is possible that some kind of
memory corruption has led to this damaged state on disk.
Seeing as you've repaired the filesystem, we can't really get a dump
of the raw inode data to find out exactly how they were corrupted.
Unless you have a copy of the fs around somewhere?
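If a copy or dd image does turn up, dumping the suspect inode range from it
with xfs_db would tell us what the corruption actually looks like (zeroed,
shifted, random garbage). Roughly like this, run against the copy rather than
the live device, and with the image path just a placeholder:

    for ino in $(seq 939526080 939526111); do
        xfs_db -f -r -c "inode $ino" -c "print" /path/to/fs-image
    done

The inode numbers are the ones repair complained about above.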
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group