To: Mel Gorman <mgorman@xxxxxxx>
Subject: Re: [PATCH 6/7] mm: vmscan: Throttle reclaim if encountering too many dirty pages under writeback
From: Wu Fengguang <fengguang.wu@xxxxxxxxx>
Date: Tue, 16 Aug 2011 22:06:52 +0800
Cc: Linux-MM <linux-mm@xxxxxxxxx>, LKML <linux-kernel@xxxxxxxxxxxxxxx>, XFS <xfs@xxxxxxxxxxx>, Dave Chinner <david@xxxxxxxxxxxxx>, Christoph Hellwig <hch@xxxxxxxxxxxxx>, Johannes Weiner <jweiner@xxxxxxxxxx>, Jan Kara <jack@xxxxxxx>, Rik van Riel <riel@xxxxxxxxxx>, Minchan Kim <minchan.kim@xxxxxxxxx>
In-reply-to: <1312973240-32576-7-git-send-email-mgorman@xxxxxxx>
References: <1312973240-32576-1-git-send-email-mgorman@xxxxxxx> <1312973240-32576-7-git-send-email-mgorman@xxxxxxx>
User-agent: Mutt/1.5.20 (2009-06-14)
Mel,

I tend to agree with the whole patchset except for this one.

The worry comes from the fact that dirty pages may well be unevenly
distributed throughout the LRU lists. This patch works on local
information and may therefore unnecessarily throttle page reclaim when
running into small spans of dirty pages.

One possible scheme for global throttling is to first tag the skipped
page with PG_reclaim (as you already do), and then to throttle page
reclaim only when running into pages with both PG_dirty and PG_reclaim
set. That condition means we have cycled through the _whole_ LRU list
(which is the global and adaptive feedback we want) and have run into
that dirty page for the second time.
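
To illustrate, here is a standalone sketch of that decision, not actual
vmscan code: the page flags are modelled with plain bits and
skip_dirty_page() is a made-up helper standing in for the dirty-page
path in shrink_page_list().

#include <assert.h>
#include <stdbool.h>

#define PG_dirty	(1u << 0)
#define PG_reclaim	(1u << 1)

struct page {
	unsigned int flags;
};

/* Returns true when reclaim should throttle on this dirty page. */
static bool skip_dirty_page(struct page *page)
{
	if (page->flags & PG_reclaim) {
		/*
		 * Second encounter: the tag set on a previous pass is
		 * still here, so we have cycled through the whole LRU
		 * list since then.  Throttling is now justified.
		 */
		return true;
	}

	/* First encounter: just tag the page and keep scanning. */
	page->flags |= PG_reclaim;
	return false;
}

int main(void)
{
	struct page p = { .flags = PG_dirty };

	assert(!skip_dirty_page(&p));	/* first pass: tag only */
	assert(skip_dirty_page(&p));	/* one full LRU cycle later: throttle */
	return 0;
}

The point is that throttling only kicks in after the tag has survived a
full pass over the LRU, rather than on the first locally observed run
of dirty pages.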

One test scheme would be to read and write a sparse file quickly with
some average read:write ratio of 5:1 or 10:1 (or whatever). This
effectively spreads dirty pages all over the LRU list. It's a practical
test since it mimics the typical file server workload with concurrent
downloads and uploads; a rough sketch of such a workload is below.
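
For reference, a rough userspace sketch of such a workload generator
(the file name, file size, block size and iteration count below are
arbitrary choices for illustration, and errors are ignored):

#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define FILE_SIZE	(1LL << 32)	/* 4GB sparse file */
#define BLOCK		4096
#define READ_RATIO	5		/* ~5 reads per write */

int main(void)
{
	char buf[BLOCK] = { 0 };
	long nr_blocks = FILE_SIZE / BLOCK;
	int fd = open("sparse-test-file", O_CREAT | O_RDWR, 0644);
	long i;

	if (fd < 0)
		return 1;

	for (i = 0; i < 10000000; i++) {
		off_t off = (off_t)(rand() % nr_blocks) * BLOCK;
		ssize_t ret;

		if (rand() % (READ_RATIO + 1))
			ret = pread(fd, buf, BLOCK, off);	/* clean page cache */
		else
			ret = pwrite(fd, buf, BLOCK, off);	/* dirty a page */
		(void)ret;	/* errors ignored in this sketch */
	}

	close(fd);
	return 0;
}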

Thanks,
Fengguang
