Hi,
A recent similar thread pointed out that this might be a problem in the
Linux IDE subsystem, so I thought I'd report my experience.
I have eight 80 GB Maxtor disks on a 3ware 6800. Running bonnie++ with some
wild parameters like -s 16384 -n 4096:1048576:2048:25 popped up this
after a couple of hours: 'cmn_err level 4 Filesystem "md(9,8)": corrupt
dinode 60680385, extent total = 1, nblocks = 0.
Unmount and run xfs_repair.'.
So I did. xfs_repair told me:
# xfs_repair /dev/md8
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
correcting nblocks for inode 60680385, was 0 - counted 68
- agno = 4
- agno = 5
... snip ...
- agno = 130
- agno = 131
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- clear lost+found (if it exists) ...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
... snip ...
- agno = 130
- agno = 131
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- ensuring existence of lost+found directory
- traversing filesystem starting at / ...
- traversal finished ...
- traversing all unattached subtrees ...
- traversals finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
Any comments on this? If someone wants, I can also try running the same
bonnie++ workload on ext2 and maybe ReiserFS; at least the results would
be interesting :)
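For anyone who wants to approximate the metadata part of that workload without
bonnie++, here's a minimal, heavily scaled-down sketch. It assumes the usual
reading of -n num:max:min:dirs (num files, with sizes between min and max
bytes, spread over dirs directories); the function name, the tiny default
sizes, and the temp-directory setup are my own for illustration, not anything
bonnie++ itself provides.

```python
import os
import random
import tempfile

def churn(root, count=200, max_size=4096, min_size=64, dirs=5):
    """Create `count` files of random size between min_size and max_size,
    spread across `dirs` subdirectories, then delete them all -- a
    scaled-down stand-in for bonnie++'s -n file creation/deletion phase.
    Returns the number of files created."""
    random.seed(0)  # deterministic sizes, for repeatable runs
    for d in range(dirs):
        os.makedirs(os.path.join(root, str(d)), exist_ok=True)
    paths = []
    for i in range(count):
        p = os.path.join(root, str(i % dirs), "f%06d" % i)
        with open(p, "wb") as f:
            f.write(b"\0" * random.randint(min_size, max_size))
        paths.append(p)
    created = len(paths)
    for p in paths:
        os.remove(p)
    return created

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as root:
        print(churn(root))
```

Cranking count/dirs up toward bonnie++'s scale (and pointing root at the XFS
mount) should give a comparable inode churn, which is presumably what tripped
the corrupt-dinode check above.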
--
Jure Pecar