On Thu, 2011-04-07 at 11:57 +1000, Dave Chinner wrote:
> From: Dave Chinner <dchinner@xxxxxxxxxx>
>
> One of the problems with the current inode flush at ENOSPC is that we
> queue a flush per ENOSPC event, regardless of how many are already
> queued. This can result in hundreds of queued flushes, most of
> which simply burn CPU scanning inodes and do no real work, which
> only slows down allocation at ENOSPC.
>
> We really only need one active flush at a time, and we can easily
> implement that via the new xfs_syncd_wq. All we need to do is queue
> a flush if one is not already active, then block waiting for the
> currently active flush to complete. The result is that we only ever
> have a single ENOSPC inode flush active at a time, and this greatly
> reduces the overhead of ENOSPC processing.
>
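For reference, a minimal sketch of this single-active-flush pattern using the
generic Linux workqueue API. The identifiers (example_syncd_wq,
example_flush_worker, flush_work_item) and the use of queue_work()/flush_work()
here are illustrative assumptions to show the idea, not the actual code in the
patch:

	#include <linux/workqueue.h>

	static struct workqueue_struct *example_syncd_wq;
	static struct work_struct flush_work_item;

	static void example_flush_worker(struct work_struct *work)
	{
		/* scan and flush dirty inodes to free reserved space */
	}

	static int example_init(void)
	{
		example_syncd_wq = alloc_workqueue("example_syncd", 0, 0);
		if (!example_syncd_wq)
			return -ENOMEM;
		INIT_WORK(&flush_work_item, example_flush_worker);
		return 0;
	}

	/* Called on each ENOSPC event. */
	static void example_flush_on_enospc(void)
	{
		/*
		 * queue_work() only queues the item if it is not already
		 * pending, so at most one flush is outstanding no matter
		 * how many ENOSPC events arrive concurrently.
		 */
		queue_work(example_syncd_wq, &flush_work_item);

		/* Block until the currently active flush completes. */
		flush_work(&flush_work_item);
	}

The point of the pattern is that the queueing step is idempotent while the
wait is per-caller, so every ENOSPC hitter still blocks until a flush has run,
but only one flush ever burns CPU at a time.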
> On my 2p test machine, this results in tests exercising ENOSPC
> conditions running significantly faster - 042 halves execution time,
> 083 drops from 60s to 5s, etc - while not introducing test
> regressions.
>
> This allows us to remove the old xfssyncd threads and infrastructure
> as they are no longer used.
Looks good. You got rid of a useless log force as well.
Reviewed-by: Alex Elder <aelder@xxxxxxx>
> Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> Reviewed-by: Christoph Hellwig <hch@xxxxxx>