Re: [PATCH 02/27] xfs: remove the unused ilock_nowait codepath in writepage

To: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Subject: Re: [PATCH 02/27] xfs: remove the unused ilock_nowait codepath in writepage
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 30 Jun 2011 11:26:58 +1000
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20110630001525.GU561@dastard>
References: <20110629140109.003209430@xxxxxxxxxxxxxxxxxxxxxx> <20110629140336.717434334@xxxxxxxxxxxxxxxxxxxxxx> <20110630001525.GU561@dastard>
User-agent: Mutt/1.5.20 (2009-06-14)
On Thu, Jun 30, 2011 at 10:15:25AM +1000, Dave Chinner wrote:
> On Wed, Jun 29, 2011 at 10:01:11AM -0400, Christoph Hellwig wrote:
> > wbc->nonblocking is never set, so this whole code has been unreachable
> > for a long time.  I'm also not sure it would make a lot of sense -
> > we'd rather finish our writeout after a short wait for the ilock
> > instead of cancelling the whole ioend.
> The problem that the non-blocking code is trying to solve is only
> obvious when the disk subsystem is fast enough to drive the flusher
> thread to being CPU bound.
> e.g. when you have a disk subsystem doing background writeback at
> 10GB/s and the flusher thread is put to sleep for 50ms while we wait
> for the lock, it can now only push 9.5GB/s. If we just move on, then
> we'll spend that 50ms doing useful work on another dirty inode
> rather than sleeping on this one and hence maintaining a 10GB/s
> background write rate.
> I'd suggest that the only thing that should be dropped is the
> wbc->nonblocking check. Numbers would be good to validate that this
> is still relevant, but I don't have a storage subsystem with enough
> bandwidth to drive a flusher thread to being CPU bound...
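The arithmetic in the quoted example works out as follows (a sketch; the 50ms of lock-wait per second of elapsed time is the duty cycle implied by the 10GB/s vs 9.5GB/s numbers, not something measured):

```python
# A CPU-bound flusher pushing 10 GB/s that spends 50 ms of every
# second sleeping on the ilock loses 5% of its wall-clock time,
# and hence 5% of its achievable background write rate.
full_rate_gbps = 10.0   # GB/s when the flusher never blocks
sleep_s = 0.050         # time spent waiting on the ilock
interval_s = 1.0        # per second of elapsed time

effective = full_rate_gbps * (1 - sleep_s / interval_s)
print(effective)        # -> 9.5
```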

I just confirmed that I don't have a fast enough storage system to
test this - the flusher thread uses only ~15% of a CPU @ 800MB/s
writeback, so I'd need somewhere above 5GB/s of throughput to see
any sort of artifact from this change....
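The extrapolation above can be sketched as follows (assuming, as the measurement suggests, that flusher CPU usage scales roughly linearly with writeback throughput):

```python
# If 800 MB/s of writeback uses ~15% of one CPU, the flusher
# thread only becomes CPU bound at roughly 800 / 0.15 MB/s,
# i.e. somewhere above 5 GB/s.
throughput_gbps = 0.8   # measured writeback rate, GB/s
cpu_fraction = 0.15     # flusher thread CPU usage at that rate

cpu_bound_at = throughput_gbps / cpu_fraction
print(round(cpu_bound_at, 1))   # -> 5.3
```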


Dave Chinner
