
Xfs_force_shutdown on recent XFS CVS

To: <linux-xfs@xxxxxxxxxxx>
Subject: Xfs_force_shutdown on recent XFS CVS
From: "Hardy I.D." <I.D.Hardy@xxxxxxxxxxx>
Date: Thu, 10 Oct 2002 15:55:56 +0100
Cc: <I.D.Hardy@xxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
Thread-index: AcJwbNpe/DScCkvnQSydJ2jFDuhK6w==
Thread-topic: Xfs_force_shutdown on recent XFS CVS
Hi,

Yesterday I rebooted a server with a recent XFS CVS (downloaded/compiled
Tuesday of this week; the kernel reports at boot time - 'SGI XFS
CVS-10/08/02:05 with quota, no debug enabled'). Since then the kernel
has twice shut down the one XFS filesystem on this server:

Oct 10 14:22:10 blue01 kernel: xfs_force_shutdown(md(9,0),0x8) called
from line 1041 of file xfs_trans.c.  Return address = 0xc01d9138
Oct 10 14:22:10 blue01 kernel: Corruption of in-memory data detected.
Shutting down filesystem: md(9,0)
Oct 10 14:22:10 blue01 kernel: Please umount the filesystem, and rectify
the problem(s)

Each time I rebooted, ran xfs_check (xfs_repair was also necessary on
the first occasion; on the second occasion - the shutdown above - the
xfs_check was clean) and remounted the filesystem OK.
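For reference, the recovery sequence I've been following is roughly the
below (the device and mount point names are illustrative, not the real
ones on this server):

```shell
# Filesystem must be unmounted before checking/repairing
umount /export

# Read-only consistency check of the XFS filesystem
xfs_check /dev/md0

# Run repair only if xfs_check reported problems
xfs_repair /dev/md0

# Remount once clean
mount /dev/md0 /export
```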

Is there a known problem with this CVS snapshot that would explain
this? I have now reverted to the XFS CVS kernel I was running before,
a few weeks older ('SGI XFS CVS-09/15/02:17 with quota, no debug
enabled'). I had not seen the 'xfs_force_shutdown' problem before, but
I have had repeated system panics/lock-ups - though I guess it may be
that the recent CVS is better able to trap errors within the XFS code?

The server is an NFS server with no user code running directly on it. The
filesystem in question is a RAID 0 'md' stripe across 2 HW RAID5 units
(~1 Tbyte in size).

Regards and thanks for any suggestions.

Ian Hardy
Information Systems Services
Southampton University
UK

