Re: Failure growing xfs with linux 3.10.5

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: Failure growing xfs with linux 3.10.5
From: Michael Maier <m1278468@xxxxxxxxxxx>
Date: Wed, 14 Aug 2013 17:16:14 +0200
Cc: Eric Sandeen <sandeen@xxxxxxxxxxx>, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20130814054332.GA12779@dastard>
References: <52073905.8010608@xxxxxxxxxxx> <5207D9C4.7020102@xxxxxxxxxxx> <5209126F.5020204@xxxxxxxxxxx> <20130813005414.GT12779@dastard> <520A48C4.6060801@xxxxxxxxxxx> <20130814054332.GA12779@dastard>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:23.0) Gecko/20100101 Firefox/23.0 SeaMonkey/2.20
Dave Chinner wrote:
> On Tue, Aug 13, 2013 at 04:55:00PM +0200, Michael Maier wrote:
>> Dave Chinner wrote:
>>> On Mon, Aug 12, 2013 at 06:50:55PM +0200, Michael Maier wrote:
>>>> Meanwhile, I faced another problem on another xfs-file system with linux
>>>> 3.10.5 which I never saw before. During writing a few bytes to disc, I
>>>> got "disc full" and the writing failed.
>>>> At the same time, df reported 69G of free space! I ran xfs_repair -n and
>>>> got:
>>>> xfs_repair -n /dev/mapper/raid0-daten2
>>>> Phase 1 - find and verify superblock...
>>>> Phase 2 - using internal log
>>>>         - scan filesystem freespace and inode maps...
>>>> sb_ifree 591, counted 492
>>>> ^^^^^^^^^^^^^^^^^^^^^^^^^
>>>> What does this mean? How can I get rid of it w/o losing data? This file
>>>> system was created a few days ago and never resized.
>>> Superblock inode counting is lazy - it can get out of sync after
>>> an unclean shutdown, but generally mounting a dirty filesystem will
>>> result in it being recalculated rather than trusted to be correct.
>>> So there's nothing to worry about here.
>> When will it be self healed?
> that depends on whether there's actually a problem. Like I said in
> the part you snipped off - if you run xfs_repair -n on a filesystem
> that needs log recovery, that accounting difference is expected.

I know that option -n doesn't change anything. That was intentional,
because xfs_repair destroyed a lot of data when applied to the other
problem I have _and_ it repaired nothing at the same time! The other
problem isn't fixed at all, although xfs_repair was used w/o -n.

That's why I am asking whether a real xfs_repair run will fix this
problem in this case _w/o_ losing any data.

>> I still can see it today after 4 remounts!
> See what?

The problem:
sb_ifree 591, counted 492

>> This is strange and I can't use the free space, which I need! How can it
>> be forced to be repaired w/o data loss?
> The above is complaining about a free inode count mismatch, not a
> problem about free space being wrong. What problem are you actually
> having?

The application that wanted to write a few bytes got a "disk full"
error, although df -h reports 69GB of free space.
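One classic cause of ENOSPC with plenty of free blocks is inode exhaustion, which df -h does not show. A quick check (using the root filesystem here as a stand-in for the actual mount point):

```shell
# df -i reports inode usage rather than block usage.
# If IFree is 0, no new files can be created even though
# df -h still shows free space.
df -i /
```

On XFS specifically, inodes are allocated dynamically, so this is less common than on ext4, but comparing df -h and df -i is still a cheap first diagnostic before reaching for xfs_repair.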

