
To: Eric Sandeen <sandeen@xxxxxxxxxx>
Subject: Re: [PATCH] libxfs: increase hash chain depth when we run out of slots
From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Thu, 17 Sep 2009 14:09:31 -0400
Cc: xfs-oss <xfs@xxxxxxxxxxx>, Tomek Kruszona <bloodyscarion@xxxxxxxxx>, Riku Paananen <riku.paananen@xxxxxxxxxxx>
In-reply-to: <4AB25E78.8050001@xxxxxxxxxx>
References: <4AB25E78.8050001@xxxxxxxxxx>
User-agent: Mutt/1.5.19 (2009-01-05)
On Thu, Sep 17, 2009 at 11:06:16AM -0500, Eric Sandeen wrote:
> A couple of people reported that xfs_repair hangs after printing
> "Traversing filesystem ...".  This happens when all slots in the
> cache are full and referenced, and the loop in cache_node_get()
> which tries to shake unused entries never finds any - it just
> keeps raising the priority and spins forever.
> 
> This can be worked around by restarting xfs_repair with -P,
> and/or with "-o bhash=<largersize>" on older xfs_repair versions.
> 
> I started down the path of increasing the number of hash buckets
> on the fly, but Barry suggested simply increasing the maximum
> allowed chain depth, which is much simpler (thanks!)
> 
> Allowing longer hash chains does mean that cache_report now lumps
> most buckets into the "greater-than" category:
> 
> ...
> Hash buckets with  23 entries      3 (  3%)
> Hash buckets with  24 entries      3 (  3%)
> Hash buckets with >24 entries     50 ( 85%)
> 
> but I think I'll save that fix for another patch unless there's
> real concern right now.
> 
> I tested this on the metadump image provided by Tomek.
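
For anyone not familiar with the libxfs cache internals, here is a
minimal sketch of the failure mode described in the quoted mail.  This
is not the actual libxfs code; struct cache, shake_unused(),
cache_get_sketch() and MAX_PRIORITY are all simplified stand-ins,
invented for illustration only:

#include <stdbool.h>
#include <stdio.h>

#define MAX_PRIORITY 15		/* stand-in for the priority ceiling */

struct cache {
	unsigned int used;	 /* nodes currently allocated */
	unsigned int referenced; /* nodes pinned by active references */
	unsigned int maxcount;	 /* configured node limit */
};

/* Stand-in for the shaker: it can only reclaim unreferenced nodes. */
static unsigned int shake_unused(struct cache *c)
{
	unsigned int reclaimable = c->used - c->referenced;

	c->used -= reclaimable;
	return reclaimable;
}

static bool cache_get_sketch(struct cache *c)
{
	int priority = 0;

	for (;;) {
		if (c->used < c->maxcount) {
			c->used++;	/* allocation succeeds */
			c->referenced++;
			return true;
		}
		if (shake_unused(c) > 0)
			continue;	/* reclaimed something; retry */
		if (priority < MAX_PRIORITY) {
			priority++;	/* shake harder on the next pass */
			continue;
		}
		/*
		 * Pre-fix behaviour: with every node referenced and
		 * the priority saturated, the real loop simply kept
		 * retrying here forever - the reported hang.  The fix,
		 * in spirit, is to let the cache (and thus the hash
		 * chains) grow past the configured limit so forward
		 * progress is always possible:
		 */
		c->maxcount *= 2;
	}
}

int main(void)
{
	struct cache c = { .used = 8, .referenced = 8, .maxcount = 8 };

	if (cache_get_sketch(&c))
		printf("got a node; cache limit is now %u\n", c.maxcount);
	return 0;
}

In the real code the priority also widens what the shaker is allowed to
reclaim; the sketch only shows the control flow.  The quoted workaround
fits the same picture: -P disables prefetching so fewer nodes are
pinned at once, and "-o bhash=<largersize>" starts xfs_repair with a
bigger cache in the first place.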

How large is that image?  I really think we need to start collecting
these images for regression testing.

The patch looks good to me,

Reviewed-by: Christoph Hellwig <hch@xxxxxx>
