
Re: [PATCH 03/27] xfs: use write_cache_pages for writeback clustering

To: Johannes Weiner <jweiner@xxxxxxxxxx>
Subject: Re: [PATCH 03/27] xfs: use write_cache_pages for writeback clustering
From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Mon, 11 Jul 2011 13:24:44 -0400
Cc: Dave Chinner <david@xxxxxxxxxxxxx>, Wu Fengguang <fengguang.wu@xxxxxxxxx>, Christoph Hellwig <hch@xxxxxxxxxxxxx>, Mel Gorman <mgorman@xxxxxxx>, Rik van Riel <riel@xxxxxxxxxx>, "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>, "linux-mm@xxxxxxxxx" <linux-mm@xxxxxxxxx>
In-reply-to: <20110711172050.GA2849@xxxxxxxxxx>
References: <20110629140109.003209430@xxxxxxxxxxxxxxxxxxxxxx> <20110629140336.950805096@xxxxxxxxxxxxxxxxxxxxxx> <20110701022248.GM561@dastard> <20110701041851.GN561@dastard> <20110701093305.GA28531@xxxxxxxxxxxxx> <20110701154136.GA17881@localhost> <20110704032534.GD1026@dastard> <20110706151229.GA1998@xxxxxxxxxx> <20110708095456.GI1026@dastard> <20110711172050.GA2849@xxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Mon, Jul 11, 2011 at 07:20:50PM +0200, Johannes Weiner wrote:
> > Yet the file pages on the active list are unlikely to be dirty -
> > overwrite-in-place, cache-hot workloads are pretty scarce in my
> > experience.  Hence writeback of dirty pages from the active LRU is
> > unlikely to be a problem.
> 
> Just to clarify, I looked at this too much from the reclaim POV, where
> use-once applies to full pages, not bytes.
> 
> Even if you do not overwrite the same bytes over and over again,
> issuing two subsequent write()s that end up against the same page will
> have it activated.
> 
> Are your workloads writing in perfectly page-aligned chunks?

Many workloads do, given that we already tell them our preferred
I/O size through struct stat, which is always the page size or larger.
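To make that concrete, here is a minimal userspace sketch of what
such a workload looks like: it sizes its write() chunks from
st_blksize, the preferred-I/O-size hint in struct stat mentioned
above.  The file name and the trimmed error handling are placeholders
for illustration, not anything from this thread:

  #include <stdlib.h>
  #include <string.h>
  #include <sys/stat.h>
  #include <fcntl.h>
  #include <unistd.h>

  int main(void)
  {
          /* hypothetical output file, just for the example */
          int fd = open("datafile", O_WRONLY | O_CREAT, 0644);
          struct stat st;

          if (fd < 0 || fstat(fd, &st) < 0)
                  return 1;

          /*
           * st_blksize is the filesystem's preferred I/O size.  As it
           * is the page size or larger, writing in chunks of this size
           * means no write() ever dirties a partial page.
           */
          size_t chunk = st.st_blksize;
          char *buf = malloc(chunk);

          if (!buf)
                  return 1;
          memset(buf, 'x', chunk);

          if (write(fd, buf, chunk) != (ssize_t)chunk)
                  return 1;

          free(buf);
          close(fd);
          return 0;
  }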

That won't help workloads that have to write in small chunks.
The performance-critical ones using small chunk sizes usually use
O_(D)SYNC, so the pages are already clean by the time the write
returns to userspace.
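For completeness, a sketch of that small-chunk case (file and record
names again hypothetical): with O_DSYNC, write() only returns once the
data has been written out, so the dirtied page has already gone
through writeback and is clean when userspace continues:

  #include <string.h>
  #include <fcntl.h>
  #include <unistd.h>

  int main(void)
  {
          /* hypothetical append-style log file, sub-page records */
          int fd = open("logfile", O_WRONLY | O_CREAT | O_DSYNC, 0644);
          char rec[512];

          if (fd < 0)
                  return 1;
          memset(rec, 0, sizeof(rec));

          /*
           * O_DSYNC semantics: write() does not return until the 512
           * bytes are stable on disk, so the page cache page backing
           * them has been written back and is clean at this point.
           * Reclaim never sees it dirty on the LRU.
           */
          if (write(fd, rec, sizeof(rec)) != sizeof(rec))
                  return 1;

          close(fd);
          return 0;
  }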
