
Re: filesystem shrinks after using xfs_repair

To: xfs@xxxxxxxxxxx
Subject: Re: filesystem shrinks after using xfs_repair
From: Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx>
Date: Fri, 30 Jul 2010 14:40:13 +0200
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>
In-reply-to: <20100730102943.GA24106@xxxxxxxxxxxxx>
Organization: it-management http://it-management.at
References: <20100726034545.GE655@dastard> <201007301223.12134@xxxxxx> <20100730102943.GA24106@xxxxxxxxxxxxx>
User-agent: KMail/1.12.4 (Linux/2.6.34.1-zmi; KDE/4.3.5; x86_64; ; )
On Friday, 30 July 2010, Christoph Hellwig wrote:
> On Fri, Jul 30, 2010 at 12:23:08PM +0200, Michael Monnerie wrote:
> > On Friday, 30 July 2010, Christoph Hellwig wrote:
> > > Recent enough kernel work fine with filesystems that inode64 was
> > > used on even if it's not specified anymore.
> >
> > Really? Since when exactly? That would be a nice feature. If we can
> > define it clearly, I could put that on the FAQ.
> 
> Linux 2.6.35 will be the first kernel with the bugfixes for this to
> work.

Hihi, *rofl*. That's what developers mean by "recent enough kernel": It 
will be "in the next release to come". :-)

> > But how does it truncate the numbers >int32 and avoid collisions?
> 
> It doesn't.  Existing inodes won't necessarily fit into 32 bits, but
> no new inodes will be allocated above that limit.
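
To make that behaviour concrete: a filesystem that once ran with inode64 may
still hold inodes numbered above 2^32-1, and GNU find can locate them. A
sketch, not from this thread; the mount point /mnt/xfs is a placeholder:

```shell
# Largest inode number that fits in 32 bits.
THRESHOLD=$(( (1 << 32) - 1 ))   # 4294967295

# List files on this filesystem whose inode numbers exceed the
# 32-bit range; -xdev keeps find from crossing mount points.
find /mnt/xfs -xdev -inum +"$THRESHOLD" -print
```

After remounting without inode64 (on a fixed kernel), this list should stop
growing: the existing high-numbered inodes remain, but no new ones appear.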

OK, sounds simple. I wrote two new FAQ entries:

http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_inode64_mount_option_for.3F

Could "all who know better than me" please verify that the information
is correct?
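
For readers following the FAQ link, the option is simply passed at mount
time; a hedged illustration (a config fragment, with /dev/sdb1 and /mnt/data
as placeholder device and mount point):

```shell
# Enable 64-bit inode numbers for this mount.
mount -o inode64 /dev/sdb1 /mnt/data

# Or persistently via an /etc/fstab entry:
# /dev/sdb1  /mnt/data  xfs  inode64  0 2
```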

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31

****** Current radio interview! ******
http://www.it-podcast.at/aktuelle-sendung.html

// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/

