OK, responding to myself: Eric showed me the mkfs output (I need threaded
email!) and yes, you are over 1 Tbyte, but your inode size has been bumped
too, so an overflow should not be behind this. I would still like to see
the other info. I do not have anywhere near the amount of disk you have to
replicate a similar setup, but I will try a smaller md configuration under
mongo and see if I can reproduce a similar corruption here.
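For reference, here is the back-of-envelope arithmetic behind the 1 Tbyte
figure as I understand it (the exact way XFS packs inode numbers is my
assumption here, not taken from the code): the inode number encodes the
filesystem block plus the inode's slot within that block, so doubling the
inode size halves the slots per block and buys back one bit.

  # Rough sketch in Python, under the packing assumption above.
  def inode_number_bits(fs_bytes, block_size=4096, inode_size=256):
      blocks = fs_bytes // block_size
      block_bits = (blocks - 1).bit_length()                    # address any block
      slot_bits = (block_size // inode_size - 1).bit_length()   # slot within block
      return block_bits + slot_bits

  tib = 1 << 40
  print(inode_number_bits(2 * tib, inode_size=256))  # 33 -> past 32 bits
  print(inode_number_bits(2 * tib, inode_size=512))  # 32 -> still fits
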
Steve
> > Hi Steve,
> >
> > Steve Lord wrote:
> > >
> > > Hmm, can you run xfs_repair -n on the filesystem (when unmounted)? I
> > > suspect there is something corrupted in there. You are shutting down
> > > because xfs is cancelling a transaction which has already modified
> > > metadata - this does not happen during normal operation.
> >
> > Just a reminder: mongo.pl always recreates the file system on /dev/md0
> > (with 'mkfs.xfs -l size=32768b -f /dev/md0') before it starts creating
> > files.
> > Here is the output of 'xfs_repair -n /dev/md0':
> >
>
> OK, that looked pretty toasty - I see this is a pretty big device too,
> how big? Can you also send the version of mkfs you are using and
> the raid configuration file? I was not aware you were using md until
> now. Also, how many mongo threads are you running in parallel?
>
> The reason I ask is that you need a recent mkfs command if you are
> using an fs bigger than 1 Tbyte: the xfs inode number can overflow 32 bits
> if you do not bump the inode size, and a recent mkfs will do this
> automatically.
>
> Steve
>
>
>