
[Bug 247] New: xfs_force_shutdown

To: xfs-master@xxxxxxxxxxx
Subject: [Bug 247] New: xfs_force_shutdown
From: bugzilla-daemon@xxxxxxxxxxx
Date: Thu, 29 May 2003 19:45:47 -0700
Sender: linux-xfs-bounce@xxxxxxxxxxx

           Summary: xfs_force_shutdown
           Product: Linux XFS
           Version: unspecified
          Platform: IA32
        OS/Version: Linux
            Status: NEW
          Severity: normal
          Priority: Medium
         Component: XFS kernel code
        AssignedTo: xfs-master@xxxxxxxxxxx
        ReportedBy: masanobu.shimura@xxxxxxx

I am using Turbolinux Server 8 with XFS.

I experience xfs_force_shutdown almost every day while deleting a backup 
directory with the rm -rf xxxx/ command.

The log messages are as follows:

May 30 05:21:34 mpzserver2 kernel: xfs_force_shutdown(ide1(22,2),0x8) called 
from line 4065 of file xfs_bmap.c.  Return address = 0xd32a15e1
May 30 05:21:34 mpzserver2 kernel: Corruption of in-memory data detected.  
Shutting down filesystem: ide1(22,2)
May 30 05:21:34 mpzserver2 kernel: Please umount the filesystem, and rectify 
the problem(s)

After unmounting the device, running xfs_repair usually shows the following messages:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

Then I mounted the device again, unmounted it, and ran xfs_repair to fix it.
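The recovery sequence described above can be sketched as follows (the device name /dev/hdd2 and mount point /mnt/backup are placeholders, not taken from the report; substitute the actual partition, here ide1(22,2)):

```shell
# Mount the filesystem so the kernel replays the dirty log,
# then unmount it cleanly.
mount -t xfs /dev/hdd2 /mnt/backup
umount /mnt/backup

# With the log now clean, xfs_repair can check and repair
# the filesystem.
xfs_repair /dev/hdd2

# Only if the mount itself fails: zero the log with -L and
# repair. This discards unreplayed metadata changes and may
# cause further corruption, so treat it as a last resort.
# xfs_repair -L /dev/hdd2
```

This matches the advice in the xfs_repair error message: a mount/unmount cycle is preferred because it replays the log instead of destroying it.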

Is this considered a real memory problem, or a bug in the XFS filesystem?
I am using regular DIMM RAM without ECC.

Please help me.

Mike Shimura

------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.
