
Re: Failure growing xfs with linux 3.10.5

To: Michael Maier <m1278468@xxxxxxxxxxx>
Subject: Re: Failure growing xfs with linux 3.10.5
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Tue, 13 Aug 2013 10:54:14 +1000
Cc: Eric Sandeen <sandeen@xxxxxxxxxxx>, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <5209126F.5020204@xxxxxxxxxxx>
References: <52073905.8010608@xxxxxxxxxxx> <5207D9C4.7020102@xxxxxxxxxxx> <5209126F.5020204@xxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Mon, Aug 12, 2013 at 06:50:55PM +0200, Michael Maier wrote:
> Eric Sandeen wrote:
> > On 8/11/13 2:11 AM, Michael Maier wrote:
> >> Hello!
> >>
> >> I think I'm facing the same problem as already described here:
> >> http://thread.gmane.org/gmane.comp.file-systems.xfs.general/54428
> > 
> > Maybe you can try the tracing Dave suggested in that thread?
> I sent you a trace.
> Meanwhile, I faced another problem on another xfs file system with linux
> 3.10.5 that I had never seen before. While writing a few bytes to disc, I
> got "disc full" and the write failed.
> At the same time, df reported 69G of free space! I ran xfs_repair -n and
> got:
> xfs_repair -n /dev/mapper/raid0-daten2
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
>         - scan filesystem freespace and inode maps...
> sb_ifree 591, counted 492
> ^^^^^^^^^^^^^^^^^^^^^^^^^
> What does this mean? How can I get rid of it w/o losing data? This file
> system was created a few days ago and never resized.

Superblock inode counting is lazy - it can get out of sync after an
unclean shutdown, but mounting a dirty filesystem will generally
result in the counters being recalculated rather than trusted to be
correct. So there's nothing to worry about here.
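
(If you want to check, lazy counters show up as "lazy-count=1" on the
log line of xfs_info output for the mounted filesystem - the mount
point below is just assumed:

  # xfs_info /mnt/daten2 | grep lazy-count      <- use your mount point
           =              sectsz=512   sunit=0 blks, lazy-count=1

They've been the mkfs default for a while now, so you almost certainly
have them enabled.)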

> Phase 7 - verify link counts...
> No modify flag set, skipping filesystem flush and exiting.

OK - it was a no-modify run, so it wouldn't have complained about a
dirty log needing replay. Hence the counters are probably out of sync
because the filesystem has a dirty log.
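
If you want to confirm that's all it is, cycle a mount to replay the
log, then re-run the read-only check on the unmounted device -
something like this (device path taken from your output, mount point
assumed):

  # mount /dev/mapper/raid0-daten2 /mnt
  # umount /mnt
  # xfs_repair -n /dev/mapper/raid0-daten2

Once the log has been replayed, the sb_ifree mismatch should be gone.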


Dave Chinner
