XFS corruptions

Emmanuel Florac eflorac at intellique.com
Mon Nov 30 12:45:54 CST 2015


On Mon, 30 Nov 2015 18:06:55 +0000
Sandeep Patel <spatel at omnifone.com> wrote:

> Hi Emmanuel,
> 
> Thanks for the response.
> 
>  [root at gc003b ~]# xfs_info /dev/sdb
> meta-data=/dev/sdb               isize=512    agcount=52, agsize=268435455 blks
>          =                       sectsz=4096  attr=2, projid32bit=1
>          =                       crc=0
> data     =                       bsize=4096   blocks=13916176384, imaxpct=1
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=521728, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0

Looks like plain defaults... you didn't apply any customizations, did you?
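As a quick sanity check (just a suggestion on my part, and /dev/sdX is a
stand-in for a scratch device of similar size), mkfs.xfs can do a dry run
so you can compare what current defaults would give against the geometry
above:

  # -N = dry run: print the parameters mkfs.xfs would pick, write nothing
  mkfs.xfs -N -f /dev/sdX
  # compare the isize/agcount/sunit/swidth it reports with your xfs_info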

> 
> We have 18 nodes each with 2 of these arrays and we are seeing this
> across the board.

Hmm, strange. Are you running the latest RAID firmware on the
controllers?
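If they happen to be LSI/MegaRAID-based controllers (just a guess on my
part), something like this would show the firmware level:

  # assumes a MegaRAID controller with the MegaCli utility installed
  MegaCli -AdpAllInfo -aALL | grep -i 'firmware\|fw package'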

Does this happen more often when the array is rebuilding or verifying,
or when the system is under heavy I/O? Or does it happen completely at
random?
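One rough way to tell: correlate the kernel log with what the array was
doing at the time, e.g.:

  # look for XFS complaints and note their timestamps
  dmesg | grep -i xfs
  # on el6 the kernel log also lands here
  grep -i xfs /var/log/messages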
 
> I have updated the xfsprogs to 3.1.11-1.0.6.el6.x86_64 which is the
> latest version available on our yum repo.

If you're not afraid of running binaries from an unknown source, here's
a 4.2.0 version of xfs_repair I built recently:
http://update.intellique.com/pub/xfs_repair-4.2.0.gz
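If you do try it, a cautious sequence would look something like this
(running with -n first is just my suggestion; nothing gets modified in
that mode):

  wget http://update.intellique.com/pub/xfs_repair-4.2.0.gz
  gunzip xfs_repair-4.2.0.gz
  chmod +x xfs_repair-4.2.0
  umount /dev/sdb                   # xfs_repair needs the filesystem unmounted
  ./xfs_repair-4.2.0 -n /dev/sdb    # -n = no-modify mode, report problems only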
 

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |   <eflorac at intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


