Power loss causes bad magic number??
Kevin Dual
kevin.dual at gmail.com
Sun Feb 8 00:10:22 CST 2009
On Sat, Feb 7, 2009 at 3:10 PM, Eric Sandeen <sandeen at sandeen.net> wrote:
> kevin.dual at gmail.com wrote:
> > I'm having a very similar problem... My 1TB RAID-5 array formatted with XFS assembles, but refuses to mount:
>
> ...
>
> > $ cat /proc/mdstat
> > Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> > md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]
> > 976767872 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
> >
> > unused devices: <none>
>
> ...
>
>
> > --------------------------------------------------
> > $ sudo xfs_check /dev/md0
> > xfs_check: unexpected XFS SB magic number 0x110812af
> > cache_node_purge: refcount was 1, not zero (node=0x1300220)
> > xfs_check: cannot read root inode (22)
> > cache_node_purge: refcount was 1, not zero (node=0x1300440)
> > xfs_check: cannot read realtime bitmap inode (22)
> > Segmentation fault
>
> hm, ok...
>
> but below you don't expect each component drive to be a consistent xfs
> fs do you? It's not a mirror :)
Actually, I did not expect each component drive to be a consistent xfs file
system; I was just trying to gather as much information as possible. I'm
glad I did, because running "sudo dd if=/dev/sd* bs=512 count=128
iflag=direct | hexdump -C | grep XFSB" on all my drives is what showed which
drive actually had the XFS superblock on it, and that let you suggest that I
try creating the array with that drive first:
$ sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 --assume-clean
/dev/sdc1 /dev/sdd1 /dev/sdb1
mdadm: /dev/sdc1 appears to be part of a raid array:
level=raid5 devices=3 ctime=Fri Feb 6 18:01:59 2009
mdadm: /dev/sdd1 appears to be part of a raid array:
level=raid5 devices=3 ctime=Fri Feb 6 18:01:59 2009
mdadm: /dev/sdb1 appears to be part of a raid array:
level=raid5 devices=3 ctime=Fri Feb 6 18:01:59 2009
Continue creating array? y
mdadm: array /dev/md0 started.
$ sudo mount -t xfs /dev/md0 test
$ cd test
/test$ ls
ALL MY DATA!
So it seems that creating a RAID-5 array in the wrong order and allowing it
to sync does not destroy the data, which is very good to know ;P
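For anyone who finds this thread later, the probe boiled down to something
like the following (just a rough sketch using my device names; adjust for
your own layout):

$ for dev in /dev/sdb /dev/sdc /dev/sdd; do echo "== $dev =="; sudo dd if=$dev bs=512 count=128 iflag=direct 2>/dev/null | hexdump -C | grep XFSB; done

The member that prints an XFSB line at offset 0x7e00 (63 sectors in, i.e.
right at the start of its msdos partition) is the one holding the beginning
of the filesystem, so it should be listed first when re-creating the array.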
Thank you very much for your help!
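One more sanity check that probably makes sense before writing anything new
to the array: unmount it and run a read-only repair pass with the same tools
as above, something like:

$ cd .. && sudo umount test
$ sudo xfs_repair -n /dev/md0

If that comes back clean, I'll be a lot more comfortable calling this
recovered.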
>
> > $ sudo xfs_check /dev/sdb1
> > xfs_check: unexpected XFS SB magic number 0x110812af
> > cache_node_purge: refcount was 1, not zero (node=0x2213220)
> > xfs_check: cannot read root inode (22)
> > bad superblock magic number 110812af, giving up
> >
> > $ sudo xfs_check /dev/sdc1
> > cache_node_purge: refcount was 1, not zero (node=0x2377220)
> > xfs_check: cannot read root inode (22)
> > cache_node_purge: refcount was 1, not zero (node=0x2377440)
> > xfs_check: cannot read realtime bitmap inode (22)
> > Segmentation fault
> >
> > $ sudo xfs_check /dev/sdd1
> > xfs_check: unexpected XFS SB magic number 0x494e41ed
> > xfs_check: size check failed
> > xfs_check: read failed: Invalid argument
> > xfs_check: data size check failed
> > cache_node_purge: refcount was 1, not zero (node=0x24f1c20)
> > xfs_check: cannot read root inode (22)
> > cache_node_purge: refcount was 1, not zero (node=0x24f1d70)
> > xfs_check: cannot read realtime bitmap inode (22)
> > Segmentation fault
> >
>
> none of the above surprises me
>
> > --------------------------------------------------
> > $ sudo xfs_repair -n /dev/md0
> > Phase 1 - find and verify superblock...
> > bad primary superblock - bad magic number !!!
> >
> > attempting to find secondary superblock...
> > ...etc...etc...fail fail fail
>
> Ok so above is the real problem.
>
> again below is not expected to work!
>
> > $ sudo xfs_repair -n /dev/sdb1
> > Phase 1 - find and verify superblock...
> > bad primary superblock - bad magic number !!!
> >
> > attempting to find secondary superblock...
> > ...etc...etc...fail fail fail
> >
> > $ sudo xfs_repair -n /dev/sdc1
> > Phase 1 - find and verify superblock...
> > error reading superblock 17 -- seek to offset 531361234944 failed
> > couldn't verify primary superblock - bad magic number !!!
> >
> > attempting to find secondary superblock...
> > ...found candidate secondary superblock...
> > error reading superblock 17 -- seek to offset 531361234944 failed
> > unable to verify superblock, continuing...
> > ...etc...etc...fail fail fail
> >
> > you know the routine...
> >
> >
> > --------------------------------------------------
> > $ sudo dd if=/dev/md0 bs=512 count=128 iflag=direct | hexdump -C | grep XFSB
> > 128+0 records in
> > 128+0 records out
> > 65536 bytes (66 kB) copied, 0.0257556 s, 2.5 MB/s
> >
> > $ sudo dd if=/dev/sdb bs=512 count=128 iflag=direct | hexdump -C | grep XFSB
> > 128+0 records in
> > 128+0 records out
> > 65536 bytes (66 kB) copied, 0.0352348 s, 1.9 MB/s
> >
> > $ sudo dd if=/dev/sdc bs=512 count=128 iflag=direct | hexdump -C | grep XFSB
> > 00007e00  58 46 53 42 00 00 10 00  00 00 00 00 0e 8e 12 00  |XFSB............|
>
> ibase=16
> 7E00
> 32256, or 63x512
>
> and sdc was:
>
> > Model: ATA ST3500641AS (scsi)
> > Disk /dev/sdc: 500GB
> > Sector size (logical/physical): 512B/512B
> > Partition Table: msdos
> >
> > Number Start End Size Type File system Flags
> > 1 32.3kB 500GB 500GB primary raid
>
> IOW the normal msdos 63 sectors.
>
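Just to double-check the offset arithmetic on my end (same numbers, run
through bc):

$ echo "ibase=16; 7E00" | bc
32256
$ echo "32256 / 512" | bc
63

So the XFSB magic sits exactly 63 sectors in, right where the first
partition starts on sdc.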
> > 128+0 records in
> > 128+0 records out
> > 65536 bytes (66 kB) copied, 0.0386271 s, 1.7 MB/s
> >
> > $ sudo dd if=/dev/sdd bs=512 count=128 iflag=direct | hexdump -C | grep XFSB
> > 128+0 records in
> > 128+0 records out
> > 65536 bytes (66 kB) copied, 0.0928554 s, 706 kB/s
> >
> > Looks like /dev/sdc is the only one with any recognizable superblock data on it.
> > --------------------------------------------------
> >
> > Now what should I do with all this information? The array assembles fine, but the XFS volume seems to be screwed up somehow. Is there any way the array could have put itself together wrong then re-synced and corrupted all my data?
>
> It seems like maybe it assembled out of order, as if sdc1 should be the
> first drive, since it has the magic at the right place.
>
> Dunno how much damage could have been done, or if you can just try to
> fix the assembly perhaps...?
>
> -Eric
>
>