
Re: xfs_force_shutdown after Raid crash

To: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Subject: Re: xfs_force_shutdown after Raid crash
From: Steffen Knauf <Steffen.Knauf@xxxxxxxxxxxxxx>
Date: Fri, 06 Feb 2009 16:57:21 +0100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20090131105712.GA30061@xxxxxxxxxxxxx>
References: <498376CF.8020806@xxxxxxxxxxxxxx> <20090131105712.GA30061@xxxxxxxxxxxxx>
Reply-to: Steffen.Knauf@xxxxxxxxx
User-agent: Thunderbird (Windows/20081209)

Sorry for the delay. I don't know whether it is interesting, but after an xfs_repair the filesystem could be completely rebuilt. Thanks, Christoph. I'm a little bit confused about the "write back cache" and the "barrier" option. On the RAID controller "Write Cache" is enabled, with "Write Cache Periodic Flush = 5 seconds" and "Write Cache Flush Ratio = 45 Percent". My kernel version is 2.6.16 (SLES10), so the default should be nobarrier, but I read in the official SGI XFS training documentation that write barriers are enabled by default on SLES10.
How can I check whether barriers are on or off? I can't find anything in the log.
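For reference, one way to check is to look at the mount options in effect and at the kernel log. This is a sketch, assuming a 2.6.16-era kernel; the device and mount-point names that appear in the output will of course be your own:

```shell
# Show the mount options actually in effect for XFS filesystems.
# If "nobarrier" is listed, barriers are explicitly off; older kernels
# may not list a "barrier" option at all when the default is in use.
grep xfs /proc/mounts

# XFS logs a message at mount time when it has to turn barriers off,
# e.g. "Disabling barriers, not supported by the underlying device".
dmesg | grep -i barrier
```

If neither command shows anything about barriers, remounting with an explicit `-o barrier` or `-o nobarrier` and watching dmesg is another way to see what the device actually supports.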



On Fri, Jan 30, 2009 at 10:53:19PM +0100, Steffen Knauf wrote:

after a RAID crash (RAID controller problem; 3 disks were kicked out of the disk group), 2 of 3 partitions (XFS filesystems) were shut down immediately.
Perhaps somebody has an idea what the best solution is (xfs_repair?).

This looks like you were running with a write back cache enabled on the
controller / disks but without barriers.  xfs_repair should be able
to repair the filesystem.  If you're lucky only the freespace-btrees
are corrupted (as in the trace below) as xfs_repair can rebuild them
from scratch.
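A typical xfs_repair run might look like the sketch below (the device name is a placeholder; the filesystem must be unmounted first):

```shell
# Unmount the affected partition before repairing it.
umount /dev/sdX1

# Dry run: -n reports what would be fixed without writing anything.
xfs_repair -n /dev/sdX1

# If the dry-run report looks reasonable, do the actual repair.
xfs_repair /dev/sdX1
```

Running with `-n` first is a cheap way to gauge the extent of the damage before letting xfs_repair modify the filesystem.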
