| To: | xfs@xxxxxxxxxxx |
|---|---|
| Subject: | XFS corrupt after RAID failure and resync |
| From: | David Raffelt <david.raffelt@xxxxxxxxxxxxx> |
| Date: | Tue, 6 Jan 2015 16:39:19 +1100 |
| Delivered-to: | xfs@xxxxxxxxxxx |
Hi All,

I have 7 drives in a RAID6 configuration with an XFS partition (running Arch Linux). Recently two drives dropped out simultaneously, and a hot spare immediately synced successfully, so I now have 6/7 drives up in the array. After a reboot (to replace the faulty drives) the XFS file system would not mount. Note that I had to perform a hard reboot, since the server hung on shutdown.

When I try to mount I get the following error:

```
mount: mount /dev/md0 on /export/data failed: Structure needs cleaning
```

I have tried to perform xfs_repair /dev/md0, and I get the following output:

```
Phase 1 - find and verify superblock...
couldn't verify primary superblock - bad magic number !!!
attempting to find secondary superblock...
..............................................................................
[many lines like this]
..............................................................................
found candidate secondary superblock...
unable to verify superblock, continuing
..............................................................................
```

Note that it has been scanning for many hours and has located several candidate secondary superblocks, all with the same error. It is still scanning; however, based on other posts, I'm guessing it will not be successful.

To investigate the superblock info I used xfs_db, and the magic number looks OK:

```
sudo xfs_db /dev/md0
xfs_db> sb
xfs_db> p
magicnum = 0x58465342
blocksize = 4096
dblocks = 3662666880
rblocks = 0
rextents = 0
uuid = e74e5814-3e0f-4cd1-9a68-65d9df8a373f
logstart = 2147483655
rootino = 1024
rbmino = 1025
rsumino = 1026
rextsize = 1
agblocks = 114458368
agcount = 32
rbmblocks = 0
logblocks = 521728
versionnum = 0xbdb4
sectsize = 4096
inodesize = 512
inopblock = 8
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 12
inodelog = 9
inopblog = 3
agblklog = 27
rextslog = 0
inprogress = 0
imax_pct = 5
icount = 4629568
ifree = 34177
fdblocks = 362013500
frextents = 0
uquotino = 0
gquotino = null
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 128
width = 640
dirblklog = 0
logsectlog = 12
logsectsize = 4096
logsunit = 4096
features2 = 0xa
bad_features2 = 0xa
features_compat = 0
features_ro_compat = 0
features_incompat = 0
features_log_incompat = 0
crc = 0 (unchecked)
pquotino = 0
lsn = 0
```

Any help or suggestions at this point would be much appreciated! Is my only option to try xfs_repair -L?

Thanks in advance,
Dave