On Wed, Sep 18, 2013 at 04:48:45PM -0500, Mark Tinguely wrote:
> On 09/08/13 20:33, Dave Chinner wrote:
> >From: Dave Chinner <dchinner@xxxxxxxxxx>
> >CPU overhead of buffer lookups dominates most metadata intensive
> >workloads. The thing is, most such workloads are hitting a
> >relatively small number of buffers repeatedly, and so caching
> >recently hit buffers is a good idea.
> >Add a hashed lookaside buffer that records the recent buffer
> >lookup successes and is searched first before doing a rb-tree
> >lookup. If we get a hit, we avoid the expensive rbtree lookup and
> >greatly reduce the overhead of the lookup. If we get a cache miss,
> >then we've added an extra CPU cacheline miss into the lookup.
> Low cost, possibly high return. Idea looks good to me.
> What happens in xfs_buf_get_map() when we lose the xfs_buf_find() race?
What race is that?
> I don't see a removal of the losing lookaside entry inserted in the
Why would we want to remove an entry just because some other lookup
aliases to the same slot and doesn't match? If the buffer we are
looking up isn't in cache at all, then we've just removed something
that has had previous cache hits and is still in cache without
inserting anything in its place. If the buffer is in the cache,
then we do an insert once we've found it. i.e. there is no need to
do removal on lookup miss...