
Re: xfs_growfs - question

To: "Nathan Scott" <nathans@xxxxxxx>
Subject: Re: xfs_growfs - question
From: "JaniD++" <djani22@xxxxxxxxxxxxx>
Date: Mon, 24 Apr 2006 13:14:26 +0200
Cc: <linux-xfs@xxxxxxxxxxx>
References: <00c301c666c4$813f72c0$1600a8c0@dcccs> <20060424085255.B1655705@xxxxxxxxxxxxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
----- Original Message ----- 
From: "Nathan Scott" <nathans@xxxxxxx>
To: "JaniD++" <djani22@xxxxxxxxxxxxx>
Cc: <linux-xfs@xxxxxxxxxxx>
Sent: Monday, April 24, 2006 12:52 AM
Subject: Re: xfs_growfs - question


> On Sun, Apr 23, 2006 at 12:55:56PM +0200, JaniD++ wrote:
> > Hello, list,
> >
> > I plan to use xfs_growfs to grow my 8 TB fs to 13 TB.
> > Is there a way to "save" some information while xfs_growfs works, to be
> > able to "undo" the operation if it fails?
>
> xfsdump? ;)  No, there is no growfs-specific undo.
>
> > I will hit some limitations around 32/64 bit and >2TB devices.
> > My 8TB fs is full, and contains a lot of important data, which I cannot
> > back up.
>
> Is using multiple filesystems an option?

What do you mean exactly?

This is one big fs built from 4x 3.3 TB nbd devices, which are currently
limited to 4x 2.0 TB.
I need this space in one mount point.

(This is free web storage, with dynamically growing, shrinking, and moving
users.)
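
(For reference, the grow step I have in mind looks roughly like this; the
mount point is only a placeholder, and saving the geometry is just a record,
not an undo:)

    # record the current geometry before growing
    xfs_info /mnt/storage > /root/xfs_geometry_before.txt

    # grow the data section to fill the enlarged devices
    xfs_growfs -d /mnt/storage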


>
> > Is it safe to run it with strace?
>
> Yes.
>
> > Phase 6 - check inode connectivity...
> >         - resetting contents of realtime bitmap and summary inodes
> >         - ensuring existence of lost+found directory
> >         - traversing filesystem starting at / ...
> > rebuilding directory inode 128
> >         - traversal finished ...
> >
> > It is always inode #128, on all of my xfs filesystems!
>
> 128 is likely to be the root inode, and what you're seeing there is
> the "lost+found" directory being re-linked in.  If you rmdir it in-
> between xfs_repair runs, this message should disappear.

Well, ok. :-)
I will check it.
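
(If I understand it right, something like this should confirm it; the device
and mount point below are only placeholders, not my real ones:)

    # confirm that inode 128 really is the root inode (xfs_db in read-only mode)
    xfs_db -r -c 'sb 0' -c 'p rootino' /dev/sdX

    # remove the empty lost+found while mounted, then unmount and re-check
    rmdir /mnt/storage/lost+found
    umount /mnt/storage
    xfs_repair -n /dev/sdX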

Thanks,
Janos

>
> cheers.
>
> -- 
> Nathan

