
Re: mount failed after xfs_growfs beyond 16 TB

To: christian.guggenberger@xxxxxxxxxxxxxxxxxxxxxxxx
Subject: Re: mount failed after xfs_growfs beyond 16 TB
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Fri, 03 Nov 2006 09:54:51 -0600
Cc: David Chinner <dgc@xxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <20061103154448.GA26647@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <20061102172608.GA27769@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <454A3B28.7010405@xxxxxxxxxxx> <20061103093203.GA18010@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20061103123418.GP8394166@xxxxxxxxxxxxxxxxx> <20061103154448.GA26647@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Thunderbird (Macintosh/20060909)
Christian Guggenberger wrote:
xfs_db mojo.... ;)

Note - no guarantee this will work - practise on an expendable
sparse loopback filesystem image: make a filesystem of slightly less
than 16TB, then grow it to corrupt it the same way, and then fix it up.
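One way to set up such an expendable test image is a sparse file on loopback. A rough sketch (paths, sizes, and mountpoint are hypothetical; needs root and enough scratch space for the written metadata):

```shell
# Hypothetical paths/sizes - adjust to your scratch space. Run as root.
truncate -s 20t /scratch/xfs-test.img        # sparse file: ~no real disk used
mkfs.xfs -d size=15t /scratch/xfs-test.img   # filesystem just under 16TB
mkdir -p /mnt/xfs-test
mount -o loop /scratch/xfs-test.img /mnt/xfs-test
xfs_growfs /mnt/xfs-test                     # grow into the rest of the image
umount /mnt/xfs-test                         # now poke at it with xfs_db
```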

Once it's corrupted, unmount and run xfs_db in expert mode.
The superblock:

blocksize = 4096
dblocks = 18446744070056148512
agblocks = 84976608
agcount = 570

An AG is ~43.5GB, so 570 AGs is 24.8TB. It's too big, and
we will only shrink by whole AGs. Hence we have to correct
agcount and dblocks.

isn't the AG size 'agblocks * blocksize' == ~324 GB here?
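The AG size can be checked directly from the sb 0 fields quoted above (plain shell arithmetic):

```shell
blocksize=4096
agblocks=84976608
ag_bytes=$((agblocks * blocksize))
echo "$ag_bytes"                       # 348064186368 bytes
echo "$((ag_bytes / 1073741824)) GiB"  # 324 - i.e. ~324 GB per AG
```

Note also that the quoted dblocks value (18446744070056148512) is just short of 2^64, which suggests a 64-bit block counter that wrapped below zero during the grow.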

got further input on a secondary superblock from the colleague:
looks more reasonable, I'd say. Is there a way to manually recover sb0
from sb1?

you can copy it over field-by-field.... not sure if there's an easier way.
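A sketch of what that field-by-field copy might look like in xfs_db expert mode (the device name and values are placeholders; compare full `p` dumps of sb 0 and sb 1 first, and work on the unmounted device):

```
# xfs_db -x /dev/sdX              <- hypothetical device, unmounted
xfs_db> sb 1
xfs_db> p                         # note the sane values (dblocks, agcount, ...)
xfs_db> sb 0
xfs_db> write dblocks <value-from-sb1>
xfs_db> write agcount <value-from-sb1>
xfs_db> p                         # re-check sb 0, then quit
```

Then verify with `xfs_repair -n` before mounting again.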

