
Re: [PATCH v2 00/11] xfs: introduce the free inode btree

To: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Subject: Re: [PATCH v2 00/11] xfs: introduce the free inode btree
From: Brian Foster <bfoster@xxxxxxxxxx>
Date: Wed, 13 Nov 2013 12:55:38 -0500
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20131113161711.GA14300@xxxxxxxxxxxxx>
References: <1384353427-36205-1-git-send-email-bfoster@xxxxxxxxxx> <20131113161711.GA14300@xxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.1.0
On 11/13/2013 11:17 AM, Christoph Hellwig wrote:
> I have to admit that I haven't followed this series as closely as I
> should, but could you summarize the performance of it?  What workloads
> does it help most, what workloads does it hurt and how much?
> 

Hi Christoph,

Sure... this work is based on Dave's write up here:

http://oss.sgi.com/archives/xfs/2013-08/msg00344.html

... where he also explains the general idea, which is basically to
improve inode allocation performance on a large fs that happens to be
sparsely populated with inode chunks containing free inodes. We do this
by creating a second inode btree that tracks only inode chunks with at
least one free inode.

So far I've only done ad hoc testing of the focused case: create
millions of inodes on an fs, strategically remove an inode towards the
end of the ag such that there is one existing inode chunk with a single
free inode, then go and create a file.

The current implementation hits the fallback search in xfs_dialloc_ag()
(the for loop prior to 'alloc_inode:') and degrades to a couple of
seconds or so (on my crappy single spindle setup). Alternatively, the
finobt in this scenario contains a single record for the chunk with the
free inode, so the record lookup and allocation time is basically
constant (i.e., we eliminate the need to ever run the full ag scan).

Sorry I don't have more specific numbers at the moment. Most of my
testing so far has been the focused case and general reliability
testing. I'll need to find some hardware worthy of performance testing,
particularly to check for any potential negative effects of managing the
secondary tree. I suppose I wouldn't expect it to be much worse than the
overhead of managing two free space trees, but we'll see.
Thoughts/suggestions appreciated, thanks.

Brian

