Need help debugging an XFS crash: xfs_iunlink_remove: xfs_inotobp() returned error 22

符永涛 yongtaofu at gmail.com
Wed Apr 10 00:36:39 CDT 2013


[/mnt/xfsd/lost+found]# pwd
/mnt/xfsd/lost+found
[ /mnt/xfsd/lost+found]# ls -l
total 0
-rw-r--r-- 1 root root 0 Feb  1 19:18 6235944
[ /mnt/xfsd/lost+found]# sudo getfattr -m . -d -e hex 6235944
[ /mnt/xfsd/lost+found]#
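
For reference, the inode number, stat info, and xattrs for every entry under lost+found can be gathered in one pass with something like the following (a rough sketch, assuming GNU stat and getfattr are available):

[ /mnt/xfsd/lost+found]# for f in *; do echo "== $f"; stat -c 'inode=%i size=%s mtime=%y' "$f"; getfattr -m . -d -e hex "$f"; done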


2013/4/10 符永涛 <yongtaofu at gmail.com>

> Here's the file info in lost+found:
>
> [ lost+found]# pwd
> /mnt/xfsd/lost+found
> [ lost+found]# ls -l
> total 4
> ---------T 1 root root 0 Feb 28 15:42 3097
> ---------T 1 root root 0 Feb 28 15:16 6169
> [root at 10.15.136.67 lost+found]# sudo getfattr -m . -d -e hex 6169
> [root at 10.15.136.67 lost+found]# sudo getfattr -m . -d -e hex 3097
> # file: 3097
> trusted.afr.ec-data-client-2=0x000000000000000000000000
> trusted.afr.ec-data-client-3=0x000000000000000000000000
> trusted.afr.ec-data1-client-2=0x000000000000000000000000
> trusted.afr.ec-data1-client-3=0x000000000000000000000000
> trusted.gfid=0x2bb701d327c44bb0af78d69e89f192a4
> trusted.glusterfs.dht.linkto=0x65632d64617461312d7265706c69636174652d3400
>
> trusted.glusterfs.quota.b8e8b3ef-0268-40af-93b6-257c4c7ef17a.contri=0x0000000004249000
>
>
> It seems these are link files created by the glusterfs dht xlator.
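>
> (For what it's worth, the linkto value is just a hex-encoded, NUL-terminated
> subvolume name; assuming xxd is available, it can be decoded like this:
>
> # echo 65632d64617461312d7265706c69636174652d3400 | xxd -r -p
> ec-data1-replicate-4
>
> i.e. it points at the ec-data1-replicate-4 subvolume.)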
>
> Thank you.
>
>
> 2013/4/10 Eric Sandeen <sandeen at sandeen.net>
>
>> On 4/9/13 10:18 AM, 符永涛 wrote:
>> > The servers are back in service now and it's hard to run xfs_repair. It
>> > always happens; below is the xfs_repair log from when it happened on
>> > another server several days ago.
>>
>> ...
>>
>> > Step 2:
>> > The xfs_repair log:
>> >
>> > sh-4.1$ sudo xfs_repair /dev/glustervg/glusterlv
>> > Phase 1 - find and verify superblock...
>> > Phase 2 - using internal log
>> >         - zero log...
>> >         - scan filesystem freespace and inode maps...
>> > agi unlinked bucket 0 is 4046848 in ag 0 (inode=4046848)
>> > agi unlinked bucket 5 is 2340485 in ag 0 (inode=2340485)
>> > agi unlinked bucket 6 is 2326854 in ag 0 (inode=2326854)
>> > agi unlinked bucket 8 is 1802120 in ag 0 (inode=1802120)
>> > agi unlinked bucket 14 is 495566 in ag 0 (inode=495566)
>> > agi unlinked bucket 16 is 5899536 in ag 0 (inode=5899536)
>> > agi unlinked bucket 19 is 4008211 in ag 0 (inode=4008211)
>> > agi unlinked bucket 21 is 4906965 in ag 0 (inode=4906965)
>> > agi unlinked bucket 23 is 2022231 in ag 0 (inode=2022231)
>> > agi unlinked bucket 24 is 1626200 in ag 0 (inode=1626200)
>> > agi unlinked bucket 25 is 938585 in ag 0 (inode=938585)
>> > agi unlinked bucket 30 is 4226526 in ag 0 (inode=4226526)
>> > agi unlinked bucket 34 is 4108962 in ag 0 (inode=4108962)
>> > agi unlinked bucket 37 is 1740389 in ag 0 (inode=1740389)
>> > agi unlinked bucket 39 is 247399 in ag 0 (inode=247399)
>> > agi unlinked bucket 40 is 6237864 in ag 0 (inode=6237864)
>> > agi unlinked bucket 43 is 3404331 in ag 0 (inode=3404331)
>> > agi unlinked bucket 45 is 2092717 in ag 0 (inode=2092717)
>> > agi unlinked bucket 48 is 4041008 in ag 0 (inode=4041008)
>> > agi unlinked bucket 50 is 1459762 in ag 0 (inode=1459762)
>> > agi unlinked bucket 56 is 852024 in ag 0 (inode=852024)
>>
>> If this machine is still around in similar state, can you do a
>>
>> # find /path/to/mount -inum $INODE_NUMBER
>>
>> for the inode numbers above, and see what files they are?
>> That might give us a clue about what operations were happening
>> to them.  Dumping the gluster xattrs on those files
>> might also be interesting.  Just guesses here, but it'd be a
>> little more data.
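>>
>> Something along these lines would collect both in one pass (a rough
>> sketch; substitute the real mount point, and trim or extend the inode
>> list as needed):
>>
>> # for i in 4046848 2340485 2326854; do echo "== inode $i"; find /path/to/mount -xdev -inum "$i" -exec getfattr -m . -d -e hex {} \; ; done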
>>
>> (if this is an old repair, maybe doing the same for your most
>> recent incident would be best)
>>
>> Thanks,
>> -Eric
>>
>> >         - found root inode chunk
>> > Phase 3 - for each AG...
>> >         - scan and clear agi unlinked lists...
>> >         - process known inodes and perform inode discovery...
>> >         - agno = 0
>> > 7f8220be6700: Badness in key lookup (length)
>> > bp=(bno 123696, len 16384 bytes) key=(bno 123696, len 8192 bytes)
>>
>> (FWIW the above warnings look like an xfs_repair bug, not related)
>>
>> -Eric
>>
>>
>
>
> --
> 符永涛
>



-- 
符永涛