I have a 3ware hardware RAID array on RHEL Linux that recently had a
drive fail. When I replaced the drive and rebuilt the array, I got a
lot of *bad sectors* warnings from the 3ware controller. After the
rebuild finished, the single partition was lost. Since it was just one
big partition, I recreated it and was able to mount the XFS filesystem
(whew). But, of course, I really need to run xfs_repair. When I do,
with the '-n' option, the first messages are these:
# xfs_repair -n /dev/sda1
Phase 1 - find and verify superblock...
sb root inode value 128 inconsistent with calculated value 256
would reset superblock root inode pointer to 256
sb realtime bitmap inode 129 inconsistent with calculated value 257
would reset superblock realtime bitmap ino pointer to 257
sb realtime summary inode 130 inconsistent with calculated value 258
would reset superblock realtime summary ino pointer to 258
Phase 2 - using internal log
- scan filesystem freespace and inode maps...
bad magic # 0x414256d3 in btcnt block 216/2
block (216,77215) already used, state 7
block (216,639792) already used, state 2
block (216,657396) already used, state 2
block (216,657397) already used, state 2
block (216,657398) already used, state 2
block (216,657399) already used, state 2
block (216,657400) already used, state 2
block (216,657401) already used, state 2
So my question is: if I ran without the -n, would my entire filesystem
be trashed? Well, it sort of is now, but I only get 'unknown error 990'
on those files and directories that are bad, and kernel traces like:
Feb 8 08:44:09 pircsds3 kernel: Filesystem "sd(8,1)": corrupt inode
3808429058 (btree). Unmount and run xfs_repair.
Feb 8 08:44:09 pircsds3 kernel: Filesystem "sd(8,1)": XFS internal error
xfs_iformat_btree at line 761 of file xfs_inode.c. Caller 0xe0a5a2a9
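Before I let xfs_repair actually write anything, my assumption is that the safe move is to image the partition first, so the current (at least partially mountable) state can be restored if the repair makes things worse. A rough sketch of what I have in mind (the backup path is just a placeholder, and it assumes the target has enough free space):

```shell
# Save a raw image of the unmounted partition before a destructive repair.
# /dev/sda1 is my device; /mnt/backup/sda1.img is a placeholder path.
# conv=noerror,sync keeps going past read errors, padding failed blocks.
dd if=/dev/sda1 of=/mnt/backup/sda1.img bs=1M conv=noerror,sync

# If the repair goes badly, the image can be written back with the
# arguments swapped:
# dd if=/mnt/backup/sda1.img of=/dev/sda1 bs=1M
```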
Any thoughts? I am running Scientific Linux's 2.4.21-27.0.2.EL.XFSsmp
kernel.