
Re: [PATCH v4 00/20] xfsprogs: introduce the free inode btree

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: [PATCH v4 00/20] xfsprogs: introduce the free inode btree
From: Brian Foster <bfoster@xxxxxxxxxx>
Date: Tue, 27 May 2014 08:06:22 -0400
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20140526224033.GP18954@dastard>
References: <1399465319-65066-1-git-send-email-bfoster@xxxxxxxxxx> <20140526224033.GP18954@dastard>
User-agent: Mutt/1.5.21 (2010-09-15)
On Tue, May 27, 2014 at 08:40:33AM +1000, Dave Chinner wrote:
> On Wed, May 07, 2014 at 08:21:39AM -0400, Brian Foster wrote:
> > Hi all,
> > 
> > Here's v4 of the finobt series for xfsprogs. Patches 1-10 are unchanged
> > as they are based on the corresponding kernel patches, which have now
> > been merged.
> > 
> > v4 includes some fairly isolated fixes for mkfs and repair based on
> > review feedback for v3:
> > 
> >     http://oss.sgi.com/archives/xfs/2014-04/msg00239.html
> > 
> > Some concern was raised over xfs_repair performance based on the
> > implementation of patch 17 in v3, so I have run a few repair tests on
> > largish filesystems. Tests involved creating a large number of inodes on
> > a 1TB 4xraid0, freeing a random percentage to populate the finobt and
> > running xfs_repair (e.g., no actual corruptions). xfs_repair was run
> > normally (with these patches) and with a change to skip the finobt
> > processing via an xfs_sb_version_hasfinobt() hack. The tests were run on
> > a 16xcpu, 32GB RAM server.
> 
> I still have some concerns about this simply based on the algorithm
> and that it will come back and bite us eventually, but for the moment
> I think you've done enough to show that it's not going to be an
> immediate issue.
> 

Fair enough, it's certainly not the most efficient thing. ;) I'm just
hesitant to add more complex infrastructure and trade off resource
consumption here without a clear cost/benefit win for that approach.
Very simple tests suggest we'd just be adding memory overhead. But of
course, repair isn't always going to be running against clean and
fairly new filesystems. Perhaps we'll see some different
characteristics when this hits some more interesting situations, and
that will help determine how to optimize this algorithm. If anything
comes up worth testing in this regard, I'm happy to dig into it...

> I haven't seen anything else that needs fixing or causes problems,
> so I'm going to merge it for 3.2.1.
> 

Sounds good, thanks!

Brian

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@xxxxxxxxxxxxx
