
Re: [PATCH] libxfs: stop caching inode structures

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: [PATCH] libxfs: stop caching inode structures
From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Thu, 9 Feb 2012 13:01:36 -0500
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20120208051126.GF20305@dastard>
References: <20120207182228.GA18801@xxxxxxxxxxxxx> <20120208051126.GF20305@dastard>
User-agent: Mutt/1.5.21 (2010-09-15)
On Wed, Feb 08, 2012 at 04:11:26PM +1100, Dave Chinner wrote:
> Ok, so what does it do to the speed of phase6 and phase7 of repair?
> How much CPU overhead does this add to every inode lookup done in
> these phases?

I'm away from my test system, but on the filesystems with tons of inodes
it actually slightly improved speed, probably because the box was
swapping less, or because we spent less time taking misses in the inode
cache, as the inode we actually care about would never be cached.

The reason the logical inode cache doesn't help here is that in phase7
we only ever touch inodes we are going to modify and write out, so we
absolutely need the backing buffer anyway.

I can't see how phase6 benefits from the logical inode cache either,
given its structure:

 - in phase 6a we iterate over each inode in the incore inode tree,
   and if it's a directory check/rebuild it
 - phase6b then updates the "." and ".." entries for directories
   that need it, which means we require the backing buffers.
 - phase6c moves disconnected inodes to lost+found, which again needs
   the backing buffer to actually do anything.

In short, there is no code in repair that benefits from logical
inode caching.
