
Re: XFS corruption on 3ware RAID6-volume

To: Erik Gulliksson <erik@xxxxxxxxxxxxxx>
Subject: Re: XFS corruption on 3ware RAID6-volume
From: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Date: Wed, 23 Feb 2011 16:23:16 +0100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <AANLkTinsfwr5E7KkffwWOweWJLCmaLnLdtYA4g_m--b0@xxxxxxxxxxxxxx>
Organization: Intellique
References: <AANLkTinWByYooMnPL7BryPowDexBeiHJdh3aVh+fdm-a@xxxxxxxxxxxxxx> <20110223154651.54f0a8dc@xxxxxxxxxxxxxxxxxxxx> <AANLkTinsfwr5E7KkffwWOweWJLCmaLnLdtYA4g_m--b0@xxxxxxxxxxxxxx>
On Wed, 23 Feb 2011 16:01:09 +0100,
Erik Gulliksson <erik@xxxxxxxxxxxxxx> wrote:

> Hi Emmanuel,
> 
> Thanks for your prompt reply.
> 
> On Wed, Feb 23, 2011 at 3:46 PM, Emmanuel Florac
> <eflorac@xxxxxxxxxxxxxx> wrote:
> >
> > What firmware version are you using?
> >
> > ( tw_cli /cX show firmware )
> 
> # tw_cli /c0 show firmware
> /c0 Firmware Version = FE9X 4.10.00.007
> 

OK so this is the latest, or close.

> 
> >
> > Augh. That sounds pretty bad. What does " tw_cli /cX/uY show all"
> > look like?
> 
> Yes, it is bad - a decision has been made to replace these disks with
> "enterprise"-versions (without TLER/ERC problems etc).

A typical mistake, alas: saving a couple hundred euros on cheap drives
that store terabytes of valuable data.

> Tw_cli produces
> this output for the volume:
> 
> # tw_cli /c0/u0 show all
> /c0/u0 status = OK
> /c0/u0 is not rebuilding, its current state is OK
> /c0/u0 is not verifying, its current state is OK
> /c0/u0 is initialized.
> /c0/u0 Write Cache = on
> /c0/u0 Read Cache = Intelligent
> /c0/u0 volume(s) = 1
> /c0/u0 name = xxx
> /c0/u0 serial number = yyy
> /c0/u0 Ignore ECC policy = off
> /c0/u0 Auto Verify Policy = off
> /c0/u0 Storsave Policy = protection
> /c0/u0 Command Queuing Policy = on
> /c0/u0 Rapid RAID Recovery setting = all
> /c0/u0 Parity Number = 2
> 
> Unit     UnitType  Status         %RCmpl  %V/I/M  Port  Stripe  Size(GB)
> ------------------------------------------------------------------------
> u0       RAID-6    OK             -       -       -     256K    12572.8

So the RAID array looks OK; the controller doesn't report any
particular problem. You said the unit was reported as 0 K -- where
exactly did you see that?

What does "dmesg | grep 3w-9xxx" show? And "tw_cli alarms"? Was the
filesystem under heavy write load when the problem occurred?
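As a minimal sketch of what that dmesg check looks for: the 3w-9xxx
driver logs AEN (asynchronous event notification) lines you can count.
The sample lines below are invented for illustration, not real output
from this system:

```shell
# Hypothetical sample of dmesg output; the 3w-9xxx AEN lines are
# illustrative examples, not logs from the affected machine.
dmesg_sample='3w-9xxx: scsi0: AEN: INFO: Verify started: unit=0.
sd 0:0:0:0: [sda] Attached SCSI disk
3w-9xxx: scsi0: AEN: ERROR: Degraded unit: unit=0, port=3.'

# Count driver-reported events, as "dmesg | grep 3w-9xxx" would surface them
hits=$(printf '%s\n' "$dmesg_sample" | grep -c 3w-9xxx)
echo "$hits"
```

Anything beyond informational AENs (resets, degraded units, sector
repairs) is worth correlating with the time of the corruption.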

I'd start with launching a RAID verify, to detect and correct possible
on-disk coherency problems (it can't hurt anyway):

tw_cli /c0/u0 start verify

Then "tail -f /var/log/messages | grep 3w-9xxx" ...

I suspect it won't turn up any problems, though. Most probably IOs to
the array were lost because of the bus reset.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |   <eflorac@xxxxxxxxxxxxxx>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------
