
Re: Failure growing xfs with linux 3.10.5

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: Failure growing xfs with linux 3.10.5
From: Michael Maier <m1278468@xxxxxxxxxxx>
Date: Tue, 13 Aug 2013 16:55:00 +0200
Cc: Eric Sandeen <sandeen@xxxxxxxxxxx>, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20130813005414.GT12779@dastard>
References: <52073905.8010608@xxxxxxxxxxx> <5207D9C4.7020102@xxxxxxxxxxx> <5209126F.5020204@xxxxxxxxxxx> <20130813005414.GT12779@dastard>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:23.0) Gecko/20100101 Firefox/23.0 SeaMonkey/2.20
Dave Chinner wrote:
> On Mon, Aug 12, 2013 at 06:50:55PM +0200, Michael Maier wrote:
>> Meanwhile, I faced another problem on another xfs-file system with linux
>> 3.10.5 which I never saw before. During writing a few bytes to disc, I
>> got "disc full" and the writing failed.
>> At the same time, df reported 69G of free space! I ran xfs_repair -n and
>> got:
>> xfs_repair -n /dev/mapper/raid0-daten2
>> Phase 1 - find and verify superblock...
>> Phase 2 - using internal log
>>         - scan filesystem freespace and inode maps...
>> sb_ifree 591, counted 492
>> ^^^^^^^^^^^^^^^^^^^^^^^^^
>> What does this mean? How can I get rid of it w/o losing data? This file
>> system was created a few days ago and never resized.
> Superblock inode counting is lazy - it can get out of sync after
> an unclean shutdown, but generally mounting a dirty filesystem will
> result in it being recalculated rather than trusted to be correct.
> So there's nothing to worry about here.

When will it be self-healed? I can still see it today after 4 remounts!
This is strange, and I can't use the free space, which I need. How can it
be forced to be repaired w/o data loss?
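For reference, the usual way to force the superblock counters to be recomputed is to run xfs_repair *without* -n on the unmounted filesystem; -n only reports problems, while a plain run actually fixes them. A minimal sketch, assuming the device path from the earlier xfs_repair -n run and a hypothetical mount point /mnt/daten2:

```shell
# xfs_repair must not be run on a mounted filesystem, so unmount first.
umount /dev/mapper/raid0-daten2

# Without -n, xfs_repair replays/zeroes the log as needed and rewrites
# the superblock free-inode and free-space counters it found to be wrong.
xfs_repair /dev/mapper/raid0-daten2

# Remount and re-check the reported free space.
# (/mnt/daten2 is a placeholder mount point for illustration.)
mount /dev/mapper/raid0-daten2 /mnt/daten2
df -h /mnt/daten2
```

A plain xfs_repair run on a filesystem with only stale lazy counters does not touch user data; it just corrects the accounting.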

Kind regards,
