
Re: Corruption of in-memory data detected.

To: Marc Schmitt <schmitt@xxxxxxxxxxx>, Eric Sandeen <sandeen@xxxxxxx>, linux-xfs@xxxxxxxxxxx, florin@xxxxxxx
Subject: Re: Corruption of in-memory data detected.
From: Steve Lord <lord@xxxxxxx>
Date: Tue, 23 Oct 2001 17:37:02 -0500
In-reply-to: Message from Steve Lord <lord@sgi.com> of "Tue, 23 Oct 2001 17:26:55 CDT." <200110232226.f9NMQtA13064@jen.americas.sgi.com>
Sender: owner-linux-xfs@xxxxxxxxxxx

OK, responding to myself: Eric showed me the mkfs output (I need threaded
email!), and yes, you are over 1 Tbyte, but your inode size has been bumped
too, so there should not be an overflow problem behind this. I would still
like to see the other info. I do not have anywhere near the amount of disk
you have to replicate a similar setup, but I will try a smaller md config
on mongo and see whether I can reproduce a similar corruption here.
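
For what it's worth, the overflow concern above can be sketched with some
back-of-the-envelope arithmetic. This is illustrative only: the real XFS
inode number also splits its bits between the AG number and the AG-relative
block, but the total bit count comes out the same way.

```python
import math

def inode_number_bits(fs_bytes, block_size=4096, inode_size=256):
    """Rough bit width of an absolute XFS inode number: bits for the
    filesystem block plus bits for the inode's index within that block.
    (A sketch, not the exact on-disk layout.)"""
    fs_blocks = fs_bytes // block_size
    inodes_per_block = block_size // inode_size
    return math.ceil(math.log2(fs_blocks)) + math.ceil(math.log2(inodes_per_block))

one_tib = 1 << 40
# 1 Tbyte with default 256-byte inodes sits exactly at the 32-bit limit:
print(inode_number_bits(one_tib))                       # 32
# Go past 1 Tbyte and the default overflows 32 bits:
print(inode_number_bits(2 * one_tib))                   # 33
# Bumping the inode size to 512 bytes halves inodes-per-block
# and buys that bit back, which is what a recent mkfs does:
print(inode_number_bits(2 * one_tib, inode_size=512))   # 32
```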

Steve

> > Hi Steve,
> > 
> > Steve Lord wrote:
> > > 
> > > Hmm, can you run xfs_repair -n on the filesystem (when unmounted) I
> > > suspect there is something corrupted in there. You are shutting down
> > > because xfs is cancelling a transaction which has already modified
> > > metadata - this does not happen during normal operation.
> > 
> > Just as a reminder: mongo.pl always recreates the file system on /dev/md0
> > (with 'mkfs.xfs -l size=32768b -f /dev/md0') before it starts creating
> > files.
> > Here is the output of 'xfs_repair -n /dev/md0':
> > 
> 
> OK, that looked pretty toasty - I see this is a pretty big device too,
> how big? Can you also send the version of mkfs you are using, and
> the raid configuration file? I was not aware you were using md until
> now. Also, how many mongo threads are you running in parallel?
> 
> The reason I ask is that you need a recent mkfs command if you are
> using an fs bigger than 1 Tbyte: XFS inode numbers can overflow 32 bits
> if you do not bump the inode size. A recent mkfs will do this automatically.
> 
> Steve
> 
> 
> 