Re: mount: Structure needs cleaning

To: MikeJeezy <forums@xxxxxxxxxxxx>
Subject: Re: mount: Structure needs cleaning
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Sat, 25 Feb 2012 22:35:31 -0600
Cc: xfs@xxxxxxxxxxx
In-reply-to: <33393100.post@xxxxxxxxxxxxxxx>
References: <33393100.post@xxxxxxxxxxxxxxx>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20120216 Thunderbird/10.0.2
On 2/25/2012 9:15 PM, MikeJeezy wrote:
> I have two 2TB xfs volumes, and earlier today /var/log/messages showed
> "xfs_force_shutdown" after many errors (attached):
> http://old.nabble.com/file/p33393100/var-log-message.txt var-log-message.txt
> Are there any options to try before running "xfs_repair -L"?  The
> volumes contain several million files, so that is my last resort.  I'm a
> novice at best in Linux.

Googling "XFS_WANT_CORRUPTED_RETURN at line 280" turns up a whole lot of
information on this.  This is an Oct 2008 response to another user with
this problem, from XFS developer Dave Chinner, one of the resident
experts on this list:

"The freespace btrees are getting out of sync for some reason.

That is, when we go to allocate an extent, we have to update two
free space btrees. This shutdown:

> XFS internal error XFS_WANT_CORRUPTED_RETURN at line 280 of file
> fs/xfs/xfs_alloc.c.  Caller 0xf88e0018

Indicates the extent being allocated was not found in one of the
two trees.

This corruption could have happened a long time ago, and it may
simply be coincidental that you've tripped over this at roughly the
same time you upgraded the kernel.

Can you run xfs_check on the filesystem to determine if a freespace
tree is corrupted (post the output if it is), then run xfs_repair
to rebuild them?"

So, run "xfs_check /dev/sde1" and post the output here.  Then await
further instructions.  Don't zero the log (xfs_repair -L), as the log
is probably not the problem.  Doing so would discard pending metadata
changes and could cause you more headaches.
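As a sketch, the usual diagnostic sequence looks like the following,
assuming the filesystem is on /dev/sde1 as in your case.  Both tools
need the filesystem unmounted, and "xfs_repair -n" is a read-only dry
run, so nothing below modifies the disk until the final command:

```shell
# Unmount the affected filesystem first; xfs_check and xfs_repair
# must not be run against a mounted, writable filesystem.
umount /dev/sde1

# Read-only consistency check; reports corruption (e.g. freespace
# btree problems) without touching the device.
xfs_check /dev/sde1

# Dry run of repair: -n scans and prints what would be fixed,
# but makes no changes.
xfs_repair -n /dev/sde1

# Only after reviewing the output above, run the actual repair.
# Do NOT add -L unless told to: -L zeroes the log, discarding
# pending metadata changes.
xfs_repair /dev/sde1
```

If xfs_repair refuses to run because the log is dirty, mounting and
cleanly unmounting the filesystem first will usually replay the log,
so that -L is not needed.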
