
Re: [PATCH 10/10] xfs: don't cache inodes read through bulkstat

To: Ben Myers <bpm@xxxxxxx>
Subject: Re: [PATCH 10/10] xfs: don't cache inodes read through bulkstat
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Fri, 16 Mar 2012 09:05:40 +1100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20120315181426.GO7762@xxxxxxx>
References: <1331095828-28742-1-git-send-email-david@xxxxxxxxxxxxx> <1331095828-28742-11-git-send-email-david@xxxxxxxxxxxxx> <20120315181426.GO7762@xxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Thu, Mar 15, 2012 at 01:14:26PM -0500, Ben Myers wrote:
> On Wed, Mar 07, 2012 at 03:50:28PM +1100, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > 
> > When we read inodes via bulkstat, we generally only read them once
> > and then throw them away - they never get used again. If we retain
> > them in cache, then it simply causes the working set of inodes and
> > other cached items to be reclaimed just so the inode cache can grow.
> > 
> > Avoid this problem by marking inodes read by bulkstat as not to be
> > cached and check this flag in .drop_inode to determine whether the
> > inode should be added to the VFS LRU or not. If the inode lookup
> > hits an already cached inode, then don't set the flag. If the inode
> > lookup hits an inode marked with no cache flag, remove the flag and
> > allow it to be cached once the current reference goes away.
> > 
> > Inodes marked as not cached will get cleaned up by the background
> > inode reclaim or via memory pressure, so they will still generate
> > some short term cache pressure. They will, however, be reclaimed
> > much sooner and in preference to cache hot inodes.
> > 
> > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> > ---
> >  fs/xfs/xfs_iget.c   |    8 ++++++--
> >  fs/xfs/xfs_inode.h  |    4 +++-
> >  fs/xfs/xfs_itable.c |    3 ++-
> >  fs/xfs/xfs_super.c  |   17 +++++++++++++++++
> >  4 files changed, 28 insertions(+), 4 deletions(-)
> > 
> > diff --git a/fs/xfs/xfs_iget.c b/fs/xfs/xfs_iget.c
> > index 93fc1dc..20ddb1e 100644
> > --- a/fs/xfs/xfs_iget.c
> > +++ b/fs/xfs/xfs_iget.c
> > @@ -290,7 +290,7 @@ xfs_iget_cache_hit(
> >     if (lock_flags != 0)
> >             xfs_ilock(ip, lock_flags);
> >  
> > -   xfs_iflags_clear(ip, XFS_ISTALE);
> > +   xfs_iflags_clear(ip, XFS_ISTALE | XFS_IDONTCACHE);
> 
> If XFS_IGET_DONTCACHE is set, maybe you don't want to clear
> XFS_IDONTCACHE.

I think that if we get a cache hit, regardless of the access method,
then the inode needs to stay cached for longer. I can't think of a
workload other than repeated xfsdump or xfs_fsr cycles that would
cause this, and in these cases it will only occur if the scans
happen faster than the reclaim period. That sort of workload would
be extremely unusual, but if it is happening then I think we should
treat it as a cached workload rather than an uncached workload
because caching the inodes long term results in better and more
consistent performance.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
