Buffalo LS-Q4.0 Raid 5 XFS errors
Eric Sandeen
sandeen at sandeen.net
Thu Mar 29 23:46:01 CDT 2012
On 3/29/12 8:09 PM, Kirk Anderson wrote:
> I really appreciate your help to this point. I do not have the xfs_irecover
> command available. Do you think there is an rpm for it that would be
> compatible with the flavor of Linux this box is running?
xfs_irecover isn't packaged up anywhere AFAIK. If/when Christoph's patch
makes it upstream, it'll make its way into packages.
Until then you'd need to get it built yourself...
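Roughly, a from-source cross-build for the box would look something like the
sketch below; the source directory name and the arm-none-linux-gnueabi
toolchain prefix are only placeholders, not something tested on this NAS:

  # illustrative only: configure and build a patched xfsprogs (or the old
  # xfs_irecover tree) with a cross-toolchain matching the armv5 userspace
  cd xfsprogs-src
  ./configure --host=arm-none-linux-gnueabi
  make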
-Eric
> root@LS-QLF55:~# uname -a
> Linux LS-QLF55 2.6.22.7 #395 Thu May 21 22:24:49 JST 2009 armv5tejl unknown
>
> If so, where may I find it? Since I do not have a backup of this, it sounds
> like I have nothing to lose by trying xfs_repair. Or should I hold out
> for xfs_irecover? Please let me know. Thanks, Kirk
>
> -----Original Message-----
> From: Dave Chinner [mailto:david at fromorbit.com]
> Sent: Thursday, March 29, 2012 6:52 PM
> To: Kirk Anderson
> Cc: xfs at oss.sgi.com
> Subject: Re: Buffalo LS-Q4.0 Raid 5 XFS errors
>
> On Thu, Mar 29, 2012 at 06:29:14PM -0500, Kirk Anderson wrote:
>> These two matched.
>>
>> root@LS-QLF55:~# dd if=/dev/sda6 bs=512 count=1 2> /dev/null | hexdump -C > dump_sda6.txt
>> root@LS-QLF55:~# dd if=/dev/md2 bs=512 count=1 2> /dev/null | hexdump -C > dump_md2.txt
>> root@LS-QLF55:~# diff dump_sda6.txt dump_md2.txt
>> root@LS-QLF55:~#
>>
>> root@LS-QLF55:~# dd if=/dev/md2 bs=512 count=4 2> /dev/null | hexdump -C
>> 00000000  58 46 53 42 00 00 10 00  00 00 00 00 2b 4e 92 c0  |XFSB........+N..|
>> 00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>> 00000020  1b 5a c5 ad cc e3 40 11  87 4d f5 8e 9b f8 37 c0  |.Z....@..M....7.|
>> 00000030  00 00 00 00 20 00 00 07  00 00 00 00 00 00 01 00  |.... ...........|
>> 00000040  00 00 00 00 00 00 01 01  00 00 00 00 00 00 01 02  |................|
>> 00000050  00 00 00 30 01 5a 74 a0  00 00 00 20 00 00 00 00  |...0.Zt.... ....|
>> 00000060  00 00 80 00 3d 84 10 00  01 00 00 10 00 00 00 00  |....=...........|
>> 00000070  00 00 00 00 00 00 00 00  0c 0c 08 04 19 00 00 19  |................|
>> 00000080  00 00 00 00 00 04 50 c0  00 00 00 00 00 00 02 ce  |......P.........|
>> 00000090  00 00 00 00 0b ba 45 ab  00 00 00 00 00 00 00 00  |......E.........|
>> 000000a0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>> 000000b0  00 00 00 00 00 00 00 02  00 00 00 10 00 00 00 30  |...............0|
>> 000000c0  00 0c 10 00 00 00 10 00  00 00 00 00 00 00 00 00  |................|
>> 000000d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>> *
>> 00000800
>
> The AGF, AGI and AGFL have all been zeroed. Something has overwritten them.
> Your filesystem is likely to be toast.
>
>>
>>
>> root@LS-QLF55:~# xfs_db -c "sb 0" -c p -c "agf 0" -c p -c "agi 0" -c p /dev/md2
>> cache_node_purge: refcount was 1, not zero (node=0xb1698)
>> xfs_db: cannot read root inode (22)
>
> And that means the zeroing has extended well into the filesystem, and your
> root directory has been lost. There's really not that much that repair can
> do for you at this point except make a mess. There is no AGI left to find
> where in-use inodes might live to recover them, and the directory structure
> cannot be used to find them, either, so I think the only thing you can do
> now is start on disaster recovery.
>
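Before anything else, the safest first disaster-recovery step is a raw,
read-only copy of the array, so nothing tried later can make things worse.
A minimal sketch, assuming an external destination of your own choosing
that is at least as large as /dev/md2:

  # take a raw image of the damaged filesystem before any repair attempts;
  # /mnt/external/md2.img is only an example destination
  dd if=/dev/md2 of=/mnt/external/md2.img bs=1M conv=noerror,sync

Any repair or extraction experiments can then be run against the image
rather than the live array.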
> Christoph had a patch to xfs_repair that allowed it to run xfs_irecover like
> functionality - I don't think he ever posted it, so you might just have to
> find the original xfs_irecover utility and make use of that to extract
> whatever you can from the busted filesystem.
>
> Other than that, I think that there's little we can do to help you recover
> the filesystem intact at this point....
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david at fromorbit.com
>
> _______________________________________________
> xfs mailing list
> xfs at oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
>