
Re: xfs corrupted

To: Stefanita Rares Dumitrescu <katmai@xxxxxxxxxxxxxxx>
Subject: Re: xfs corrupted
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 17 Oct 2013 09:16:36 +1100
Cc: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <525E9550.80008@xxxxxxxxxxxxxxx>
References: <1381826507281-35009.post@xxxxxxxxxxxxx> <20131015203434.2f336fd8@xxxxxxxxxxxxxx> <525D8D67.2090301@xxxxxxxxxxxxxxx> <20131015213447.40d05ea0@xxxxxxxxxxxxxx> <525D9E3B.5040507@xxxxxxxxxxxxxxx> <20131015202640.GR4446@dastard> <525E9550.80008@xxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Wed, Oct 16, 2013 at 03:32:00PM +0200, Stefanita Rares Dumitrescu wrote:
> Quick update:
> 
> The xfsprogs from the CentOS 6 yum repo are newer and don't use
> that much memory; however, I got 2 segfaults and the process stopped.
> 
> I cloned the xfsprogs git and am running it now with the new 15 GB
> swap that I created, and it is a monster in memory usage.
> 
> Pretty big discrepancy.

Not if the CentOS 6 version is segfaulting before it gets to the
stage that consumes all the memory. From your subsequent post, you
have 76 million inodes in the filesystem. If xfs_repair has to track
all those inodes as part of the recovery (e.g. you lost the root
directory), then it has to index them all in memory.
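
For a rough idea of how much memory the repair pass will want, a dry
run with the memory limit deliberately set too low should make
xfs_repair report its estimate up front (assuming your xfsprogs
supports the -n, -vv and -m options; the exact output varies between
versions, and /dev/<yourdev> is a placeholder for your device):

    # no-modify dry run; -m 1 caps memory at 1MB so xfs_repair
    # reports the memory it thinks it needs instead of running
    xfs_repair -n -vv -m 1 /dev/<yourdev>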

Most people have no idea how much disk space this amount of metadata
consumes, and hence why xfs_repair might run out of memory.  For
example, a newly created 100TB filesystem with 50 million zero
length files in it consumes 28GB of space in metadata.  You've got
50% more inodes than that, so xfs_repair is probably walking in
excess of 40GB of metadata in your filesystem.
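
As a back-of-the-envelope check (taking the 28GB-per-50-million
figure above as a baseline of roughly 560 bytes of metadata per
inode; the real overhead depends on directory sizes and the like):

    # 76 million inodes at ~560 bytes of metadata each
    echo $(( 76000000 * 560 / 1000000000 ))GB    # prints 42GB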

If a significant portion of that metadata is corrupt, then repair
needs to hold both the suspicious metadata and a cross reference
index in memory to be able to rebuild it all. Hence when you have
tens of gigabytes of metadata, xfs_repair can need tens of GB of RAM
to be able to repair it. There's simply no easy way around this.
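
If you can't put that much RAM in the box, the usual workaround is
to give it a lot more swap than the 15GB you've already added and
let repair grind through it.  A minimal sketch, assuming you have a
spare filesystem mounted at /mnt/spare (a placeholder - use any disk
other than the one being repaired) with room for a 32GB file:

    # 32GB swap file on a disk other than the one being repaired
    dd if=/dev/zero of=/mnt/spare/repair.swap bs=1M count=32768
    chmod 600 /mnt/spare/repair.swap
    mkswap /mnt/spare/repair.swap
    swapon /mnt/spare/repair.swap

Swapping makes repair a lot slower than having the memory in RAM,
but it's better than having the process die partway through.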

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
