
Re: [RFC 0/3] Convert XFS inode hashes to radix trees

To: David Chinner <dgc@xxxxxxx>
Subject: Re: [RFC 0/3] Convert XFS inode hashes to radix trees
From: Shailendra Tripathi <stripathi@xxxxxxxxx>
Date: Tue, 14 Nov 2006 17:09:03 -0800
Cc: xfs-dev@xxxxxxx, xfs@xxxxxxxxxxx
In-reply-to: <20061003060610.GV3024@xxxxxxxxxxxxxxxxx>
References: <20061003060610.GV3024@xxxxxxxxxxxxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Thunderbird 1.5.0.8 (X11/20061025)
Hi David,

I apologize for raising comments and questions on this so late (somehow I missed the email). It does appear to me that this approach can help with the manipulations of the cluster hash lists. However, this seems (to me) to come at the cost of the regular inode lookup.

As of now, each of the hash buckets has its own lock. This keeps the xfs_iget path from becoming hot: I have not seen xfs_iget anywhere near the top in my profiling of Linux for SPEC SFS. With the current code, the number of hash buckets can be sized appropriately (based upon memory availability).

However, it appears to me that the radix trees (even with 15 of them) can become a bottleneck. Let's assume there are 600K cached inodes on a reasonably big system; assuming a fair distribution, each radix tree will hold 600K/15 ~ 40K inodes. Insertions and deletions have to take the writer lock, and given their frequency, both readers (lookups) and writers will be affected.

That means that while one tree is locked for an insertion or deletion, lookups on the remaining ~40K inodes in that tree are simply serialized. In the current design, however, by sacrificing a little extra memory we can allocate more hash buckets, so the number of inodes serialized behind any one lock can be made quite small. My knowledge of radix trees is a little limited, but I think increasing the number of trees would be much more costly in memory terms. Given the lower memory usage and finer lock granularity, I tend to believe a hash table is more scalable than radix trees for the inode tables.
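
To make the granularity point concrete, the current scheme is roughly the following sketch (all identifiers here are made up for illustration, not the actual XFS code): a writer locks a single bucket and serializes only the few inodes chained there, and that chain shrinks as the bucket count grows, whereas with 15 radix trees one writer holds up ~40K inodes.

#include <linux/list.h>
#include <linux/spinlock.h>

#define NHASH 1024                      /* sized at mount time */

struct xfs_inode {                      /* illustrative subset */
        unsigned long     i_ino;        /* inode number */
        struct hlist_node i_hnode;      /* hash chain linkage */
};

struct xfs_ihash {
        rwlock_t          ih_lock;      /* guards this bucket only */
        struct hlist_head ih_chain;     /* ~(total/NHASH) inodes */
};

static struct xfs_ihash ihash[NHASH];   /* locks set up at mount */

static struct xfs_inode *
ihash_lookup(unsigned long ino)
{
        struct xfs_ihash *ih = &ihash[ino % NHASH];
        struct xfs_inode *ip, *found = NULL;
        struct hlist_node *pos;

        /* A concurrent insert/delete write-locks ONE bucket, so it
         * serializes only the handful of inodes chained here. */
        read_lock(&ih->ih_lock);
        hlist_for_each_entry(ip, pos, &ih->ih_chain, i_hnode) {
                if (ip->i_ino == ino) {
                        found = ip;
                        break;
                }
        }
        read_unlock(&ih->ih_lock);
        return found;
}
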
Have you done any performance testing with these patches? I am quite curious to see the results. If not, maybe I can try some performance testing with these changes, albeit on an old kernel tree.

Am I missing something here? Please let me know.

Thanks and Regards,
Shailendra

David Chinner wrote:
One of the long standing problems with XFS on large machines and
filesystems is the sizing of the inode cache hashes used by XFS to
index the xfs_inode_t structures. The mount option ihashsize became
a necessity because the default calculations simply can't get it
right for all situations.

On top of that, as we increase the size of the inode hash and cache
more inodes, the inode cluster hash becomes the limiting factor,
especially when we have sparse cluster population.  The result of
this is that we can always get to the point where either the ihash
or the chash is a scalability or performance limitation.

The following three patches replace the hashes with a more scalable
solution that should not require tweaking in most situations.

I chose a radix tree to replace the hash chains because of a neat
alignment of XFS inode structures and the kernel radix tree fanout.
XFS allocates inodes in clusters of 64 inodes and the radix tree
keeps 64 sequential entries per node.  That means all of the inodes
in a cluster will always sit in the same node of the radix tree.
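
For illustration, that alignment means the first index of a cluster falls out of a simple mask, and every inode of the cluster shares the same leaf-node selector (a sketch: the 64s come from the statement above, the constant names are made up to avoid clashing with the real kernel macros):

#define XFS_INODES_PER_CLUSTER  64UL    /* inodes per cluster */
#define RADIX_MAP_SHIFT_EG      6       /* 2^6 = 64 slots per node */

/* First index of the cluster containing agino. */
static inline unsigned long
cluster_base(unsigned long agino)
{
        return agino & ~(XFS_INODES_PER_CLUSTER - 1);
}

/* Leaf-node selector: identical for all 64 inodes of an aligned
 * cluster, so the whole cluster lives in one radix tree node. */
static inline unsigned long
leaf_index(unsigned long agino)
{
        return agino >> RADIX_MAP_SHIFT_EG;
}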

Using this relationship, we completely remove the need for the
cluster hash to track clusters because we can use a gang lookup on
the radix tree to search for an existing inode in the cluster in an
efficient manner.
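
Roughly, using the stock radix_tree_gang_lookup() primitive, that looks like this sketch (everything other than the gang lookup call itself is illustrative, not the actual patch code):

#include <linux/radix-tree.h>

#define INODES_PER_CLUSTER 64UL

struct xfs_inode {                      /* illustrative subset */
        unsigned long i_agino;          /* index within this tree */
};

/* Collect the cached inodes of agino's cluster into batch[] (which
 * must have room for 64 pointers).  Caller holds the tree's read
 * lock.  Returns the number of inodes found.
 */
static unsigned int
cluster_gather(struct radix_tree_root *tree, unsigned long agino,
               struct xfs_inode **batch)
{
        unsigned long first = agino & ~(INODES_PER_CLUSTER - 1);
        unsigned int nr, i;

        /* One gang lookup fetches every populated slot starting at
         * the cluster base, in ascending index order. */
        nr = radix_tree_gang_lookup(tree, (void **)batch, first,
                                    INODES_PER_CLUSTER);

        /* A sparse cluster makes the lookup run past its 64 slots,
         * so trim the results to the cluster's index range. */
        for (i = 0; i < nr; i++)
                if (batch[i]->i_agino >= first + INODES_PER_CLUSTER)
                        break;
        return i;
}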

The following three patches sit on top of the recently posted
i_flags cleanup patch.
(http://marc.theaimsgroup.com/?l=linux-xfs&m=115985254820322&w=2)

The first patch replaces the inode hash chains with radix trees.  A
single radix tree with a read/write lock does not provide enough
parallelism to prevent performance regressions under simultaneous
create/unlink workloads, so we hash the inode clusters into
different radix trees each with their own read/write lock. The
default is to create (2*ncpus)-1 radix trees up to a maximum of 15.
At this point I have left the ihashsize mount option alone but
limited the maximum number it can take to 128. If you specify more
than 128 (i.e. everyone currently using this mount option), it
falls back to the default.
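
In rough outline, the structure is something like the following sketch (the names, the hash choice and the GFP flag are illustrative assumptions; only the counts come from the description above):

#include <linux/radix-tree.h>
#include <linux/spinlock.h>

#define MAX_ITREES     15
#define CLUSTER_SHIFT  6        /* 64 inodes per cluster */

struct inode_itree {
        rwlock_t               lock;    /* one rwlock per tree */
        struct radix_tree_root root;
};

static struct inode_itree itree[MAX_ITREES];
static unsigned int nr_itrees;

static void
itree_init(unsigned int ncpus)
{
        unsigned int i;

        nr_itrees = 2 * ncpus - 1;      /* (2*ncpus)-1 trees... */
        if (nr_itrees > MAX_ITREES)     /* ...capped at 15 */
                nr_itrees = MAX_ITREES;
        for (i = 0; i < nr_itrees; i++) {
                rwlock_init(&itree[i].lock);
                /* inserts happen under the writer lock */
                INIT_RADIX_TREE(&itree[i].root, GFP_ATOMIC);
        }
}

/* Hash whole clusters to a tree, so a cluster's 64 inodes always
 * share one tree and can be found with a single gang lookup. */
static struct inode_itree *
itree_for(unsigned long agino)
{
        return &itree[(agino >> CLUSTER_SHIFT) % nr_itrees];
}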

The second patch introduces a per-cluster object lock for chaining
the inodes in the cluster together (for xfs_iflush()). The inode
chain is currently locked by the cluster hash chain lock, so we need
some other method of locking if we are to remove the cluster hash
altogether.
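
Shaped very roughly like this sketch (purely illustrative names, not the patch code):

#include <linux/list.h>
#include <linux/spinlock.h>

/* A small per-cluster object whose lock protects the chain of
 * in-core inodes that xfs_iflush() walks, taking over from the
 * cluster hash chain lock. */
struct inode_cluster {
        spinlock_t       icl_lock;      /* guards icl_inodes */
        struct list_head icl_inodes;    /* inodes xfs_iflush() walks */
};

static void
cluster_attach(struct inode_cluster *icl, struct list_head *i_cnode)
{
        spin_lock(&icl->icl_lock);
        list_add(i_cnode, &icl->icl_inodes);
        spin_unlock(&icl->icl_lock);
}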

The third patch removes the cluster hash and replaces it with some
masking and a radix tree gang lookup.

Overall, the patchset removes more than 200 lines of code from the
xfs inode caching and lookup code and provides more consistent
scalability for large numbers of cached inodes. The only downside
is that it limits us to 32 bit inode numbers on 32 bit platforms due
to the way the radix tree uses unsigned longs for its indexes.

Comments, thoughts, etc are welcome.

Cheers,

Dave.

