Dear all,
I have a 12 x 500 GB hardware RAID-5 array on an ARECA 1130-ML
controller. It holds a single partition, exported as /dev/sdc1.
This configuration worked fine for four months.
Then the computer crashed a couple of times, which left me in a
situation where the output of xfs_check /dev/sdc1 is:
xfs_check /dev/sdc1 output is:
xfs_check: unexpected XFS SB magic number 0x45464920
xfs_check: size check failed
xfs_check: read failed: Invalid argument
xfs_check: data size check failed
xfs_check: failed to alloc 58876353264 bytes: Cannot allocate memory
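If it helps, I can also post a raw dump of the start of the partition,
obtained read-only with something like:

  dd if=/dev/sdc1 bs=512 count=1 2>/dev/null | hexdump -C

in case that shows what is actually sitting where the XFS superblock
should be.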
I also checked the RAID, and the controller appears to be fine: I can
communicate with it, all 12 disks are visible, their SMART status is
OK, the RAID-5 set is reported to be in 'normal' condition, and so on.
For reference, here is what xfs_db (read-only) and a no-modify
xfs_repair run report:
[root@localhost ~]# xfs_db -r /dev/sdc1
xfs_db: unexpected XFS SB magic number 0x45464920
xfs_db: size check failed
xfs_db: read failed: Invalid argument
xfs_db: data size check failed
xfs_db: failed to alloc 58876353264 bytes: Cannot allocate memory
--------------------
[root@localhost ~]# xfs_repair -nv /dev/sdc1
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
................................................................................
...
....................found candidate secondary superblock...
unable to verify superblock, continuing...
...
....................found candidate secondary superblock...
verified secondary superblock...
would write modified primary superblock
Primary superblock would have been modified.
Cannot proceed further in no_modify mode.
Exiting now.
----------------
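Since the no-modify xfs_repair run does find and verify a secondary
superblock, would it be sensible to first inspect one of the backup
superblocks read-only with xfs_db? I had something like the following
in mind, although, given the errors above, xfs_db may refuse to open
the device at all:

  xfs_db -r -c 'sb 1' -c 'print' /dev/sdc1

i.e. print the superblock of allocation group 1 without writing
anything.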
I would very much appreciate advice on how to proceed in such a
situation. I worry that running xfs_repair for real will indeed repair
something, but may leave behind a mess that is hard to recover from.
I am hoping there is a safer way.
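For example, would it make sense to take a full image of the partition
(or at least a metadata dump) onto separate storage before letting
xfs_repair write anything, so that I can roll back if the repair makes
things worse? I was thinking along these lines (the target paths are
just placeholders, and I am not sure xfs_metadump will even run while
the primary superblock is bad):

  # raw image of the whole partition (needs free space about the size
  # of the array)
  dd if=/dev/sdc1 of=/mnt/backup/sdc1.img bs=64M conv=noerror,sync

  # metadata-only dump, restorable later with xfs_mdrestore
  xfs_metadump -o /dev/sdc1 /mnt/backup/sdc1.metadump

Or is there a better established procedure for this?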
Best regards
Gaspar