
Re: mount: Structure needs cleaning

To: MikeJeezy <forums@xxxxxxxxxxxx>
Subject: Re: mount: Structure needs cleaning
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Mon, 27 Feb 2012 11:49:02 +1100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <33393429.post@xxxxxxxxxxxxxxx>
References: <33393100.post@xxxxxxxxxxxxxxx> <4F49B693.4080309@xxxxxxxxxxxxxxxxx> <33393429.post@xxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Sat, Feb 25, 2012 at 11:22:29PM -0800, MikeJeezy wrote:
> 
> 
> On 02/25/2012 10:35pm, Stan Hoeppner wrote:
> >Can you run xfs_check on the filesystem to determine if a freespace
> >tree is corrupted (post the output if it is), then run xfs_repair
> >to rebuild them?"
> 
> Thank you for responding.  This is a 24/7 production server and I did not
> anticipate getting a response this late on a Saturday, so I panicked quite
> frankly, and went ahead and ran "xfs_repair -L" on both volumes.

The only reason for running xfs_repair -L is if you cannot mount the
filesystem to replay the log. That is, after a shutdown like this, the
usual process is:

<shutdown>
umount <dev>
mount <dev>
umount <dev>
xfs_repair <dev>

The only reason for needing to run "xfs-repair -L <dev>" is if the
mount after the shutdown fails to run log recovery.
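
The sequence above can be sketched as a dry-run shell function that
just prints the recommended commands (device path and mount point are
placeholders, not from the thread; run the real commands by hand on
your system):

```shell
# recover_sequence prints the post-shutdown recovery steps for a
# given device, without executing anything (side-effect-free sketch).
recover_sequence() {
    dev=$1
    echo "umount $dev"       # make sure the filesystem is unmounted
    echo "mount $dev /mnt"   # mounting replays the dirty log
    echo "umount $dev"
    echo "xfs_repair $dev"   # repair only a clean, unmounted filesystem
}

recover_sequence /dev/sde1
```

Only if the mount step fails to replay the log would xfs_repair -L
(which zeroes the log, discarding unreplayed metadata) come into play.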

> I can now
> mount the volumes and everything looks okay as far as I can tell.  There
> were only 2 files in the "lost+found" directory after the repair.  Does that
> mean only two files were lost?  Is there any way to tell how many files were
> lost?

You can only find out by looking at what the output of xfs_repair
told you about trashed inodes/directories.

> >This corruption could have happened a long time ago in the past, and
> >it may simply be coincidental that you've tripped over this at
> >roughly the same time you upgraded the kernel.
> 
> It would be nice to find out why this happened.  I suspect it is as you
> suggested, previous corruption and not a hardware issue, because I have
> other volumes mounted to other VM's that are attached to the same SAN
> controller / RAID6 Array... and they did not have any issues - only this one
> VM.
> 
> >So, run "xfs_check /dev/sde1" and post the output here.  Then await
> >further instructions.  
> 
> Can I still do this (or anything) to help uncover any causes or is it too
> late?  I have also run yum update on the server because it was out of date.

Too late. In any case, xfs_check is deprecated; use "xfs_repair -n
<dev>" to check a filesystem for errors without modifying/fixing
anything.
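
A minimal sketch of gating on such a read-only check (again printed
rather than executed, since it needs a real block device; /dev/sde1 is
the device from the thread):

```shell
# check_fs prints the read-only check command for a device.
# In a real invocation, xfs_repair -n exits non-zero when it finds
# corruption, so you could gate further action on its exit status:
#   if ! xfs_repair -n "$dev"; then echo "corruption detected"; fi
check_fs() {
    dev=$1
    echo "xfs_repair -n $dev"   # -n: no modify mode, report only
}

check_fs /dev/sde1
```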

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
