Re: [PATCH 0/5] Per superblock shrinkers V2

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: [PATCH 0/5] Per superblock shrinkers V2
From: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Date: Thu, 27 May 2010 13:32:23 -0700
Cc: linux-kernel@xxxxxxxxxxxxxxx, linux-fsdevel@xxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx, xfs@xxxxxxxxxxx
In-reply-to: <1274777588-21494-1-git-send-email-david@xxxxxxxxxxxxx>
References: <1274777588-21494-1-git-send-email-david@xxxxxxxxxxxxx>
On Tue, 25 May 2010 18:53:03 +1000
Dave Chinner <david@xxxxxxxxxxxxx> wrote:

> This series reworks the filesystem shrinkers. The current
> filesystem shrinkers have a number of issues:
> 
>         1. There is a dependency between dentry and inode cache
>            shrinking that is only implicitly defined by the order of
>            shrinker registration.
>         2. The shrinkers need to walk the superblock list and pin
>            the superblock to avoid unmount races with the sb going
>            away.
>         3. The dentry cache uses per-superblock LRUs and proportions
>            reclaim between all the superblocks, which means we are
>            doing breadth-based reclaim. We touch every superblock
>            for every shrinker call, and may only reclaim a single
>            dentry at a time from a given superblock.
>         4. The inode cache has a global LRU, so it has different
>            reclaim patterns to the dentry cache, despite the fact
>            that the dentry cache is generally the only thing that
>            pins inodes in memory.
>         5. Filesystems need to register their own shrinkers for
>            caches and can't co-ordinate them with the dentry and
>            inode cache shrinkers.

Nice description, but...  it never actually told us what the benefit of
the changes is.  Presumably some undescribed workload had some
undescribed user-visible problem.  But what was that workload, what was
the user-visible problem, and how does the patch series affect all this?

Stuff like that.
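
For reference, issue 5 above concerns the existing way a filesystem
registers its own cache shrinker.  Below is a minimal sketch, assuming a
hypothetical "foofs" with a private cache and a made-up
foofs_prune_cache() helper; struct shrinker, register_shrinker(),
unregister_shrinker(), DEFAULT_SEEKS and the __GFP_FS check are the
existing kernel interface, while the callback signature is shown with
the struct shrinker context argument that was being added around this
time, so it may differ slightly in other trees.

/*
 * Sketch of a per-filesystem cache shrinker as registered today.
 * "foofs", foofs_prune_cache() and foofs_nr_cached are hypothetical.
 */
#include <linux/mm.h>
#include <linux/module.h>

static int foofs_nr_cached;     /* objects in foofs' hypothetical cache */

/* Hypothetical helper: frees up to 'nr' objects from the cache. */
static void foofs_prune_cache(int nr)
{
        /* walk an LRU, free objects, decrement foofs_nr_cached */
}

/*
 * Called by the VM under memory pressure.  Note the callback receives
 * no superblock (or other per-instance) context: the shrinker is
 * global to the filesystem type, which is what issue 5 is about.
 */
static int foofs_shrink(struct shrinker *shrink, int nr_to_scan,
                        gfp_t gfp_mask)
{
        if (nr_to_scan) {
                if (!(gfp_mask & __GFP_FS))
                        return -1;      /* must not recurse into fs code */
                foofs_prune_cache(nr_to_scan);
        }
        /* return the cache size so the VM can proportion future scans */
        return foofs_nr_cached;
}

static struct shrinker foofs_shrinker = {
        .shrink = foofs_shrink,
        .seeks  = DEFAULT_SEEKS,
};

static int __init foofs_init(void)
{
        register_shrinker(&foofs_shrinker);
        return 0;
}

static void __exit foofs_exit(void)
{
        unregister_shrinker(&foofs_shrinker);
}

module_init(foofs_init);
module_exit(foofs_exit);
MODULE_LICENSE("GPL");

The point of the sketch is that the callback gets no superblock context,
so a filesystem's private cache cannot be shrunk in proportion to, or in
coordination with, the dentry and inode caches of the same superblock.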
