
To: xfs@xxxxxxxxxxx
Subject: [RFC, PATCH 0/3] Kill async inode writeback
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Sat, 2 Jan 2010 14:03:33 +1100

Currently we do background inode writeback on demand from many
different places - xfssyncd, xfsbufd, the bdi writeback threads and
when pushing the AIL. The result is that inodes can be pushed at any
time and there is little to no locality in the IO patterns that
result from such writeback. Indeed, we can have competing writebacks
occurring at the same time, which only serves to slow down writeback.

The idea behind this series is to make metadata buffers get written
from xfsbufd via the delayed write queue rather than from all these
other places. All the other places have to do is mark the buffers as
delayed write so that the xfsbufd can issue them.
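
To illustrate the shape of this, here is a toy userspace sketch (not
the code in the patches - the names and structures below are made up
purely for illustration). A flush path stops issuing IO itself and
just queues the buffer so the xfsbufd can age and write it later:

    #include <stddef.h>
    #include <time.h>

    /* toy model of a delayed write buffer - not the kernel structure */
    struct buf {
        long long   blkno;      /* disk address, used for sorting later */
        time_t      queued_at;  /* when it went on the delwri queue */
        int         promoted;   /* short-cut the aging (see below) */
        struct buf  *next;      /* delwri queue linkage */
    };

    static struct buf *delwri_queue;

    /* flush paths call something like this instead of issuing async IO */
    static void delwri_mark(struct buf *bp)
    {
        bp->queued_at = time(NULL);
        bp->next = delwri_queue;
        delwri_queue = bp;
        /* no IO is issued here - only the xfsbufd writes the buffers */
    }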

This means that inode flushes can no longer happen asynchronously,
but we still need a method for ensuring timely dispatch of buffers
that we may be waiting for IO completion on. To do this, we allow
delayed write buffers to be "promoted" in the delayed write queue.
This effectively short-cuts the aging of the buffers; combined with
a demand flush of the xfsbufd, all aged and promoted buffers get
pushed out at the same time.
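
Sticking with the toy sketch above (again, made-up names rather than
the actual patch), promotion just flags the buffer so the next
xfsbufd pass treats it as if it had already aged out:

    /* short-cut the normal aging so a demand flush picks the buffer up */
    static void delwri_promote(struct buf *bp)
    {
        bp->promoted = 1;
    }

    /* the xfsbufd writes a buffer once it has aged out or been promoted */
    static int delwri_ready(const struct buf *bp, time_t now, int age_secs)
    {
        return bp->promoted || now - bp->queued_at >= age_secs;
    }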

Combine this with sorting the delayed write buffers into disk offset
order before dispatch, and we vastly improve the IO patterns for
metadata writeback. IO is issued from one place and in a
disk/elevator friendly order.
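
As a sketch of the sorting step (continuing the toy model - however
the series actually orders its list, qsort() over an array of buffer
pointers is used here only for illustration), the comparison is
simply on the buffer's disk address:

    #include <stdlib.h>

    /* order buffers by ascending disk address before dispatch */
    static int buf_cmp(const void *a, const void *b)
    {
        const struct buf *ap = *(const struct buf * const *)a;
        const struct buf *bp = *(const struct buf * const *)b;

        if (ap->blkno < bp->blkno)
            return -1;
        if (ap->blkno > bp->blkno)
            return 1;
        return 0;
    }

    /* e.g.: qsort(bufs, nbufs, sizeof(bufs[0]), buf_cmp); then issue IO */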

Perf results on a debug XFS build (which means allocation patterns
are variable, so runtimes are also a bit variable):

Untar 2.6.32 kernel tarball, sync, then remove:

                Untar+sync      rm -rf
xfs-dev:          25.2s          13.0s
xfs-dev-delwri:   22.5s           9.1s

4 processes each creating 100,000 five byte files in separate
directories concurrently, then 4 processes each removing one of the
directories concurrently.

                create          rm -rf
xfs-dev:         8m32s           4m10s
xfs-dev-delwri:  4m55s           3m42s

There is still followup work to be done on the buffer sorting to
make it more efficient, but overall the concept appears to be solid
based on the improvements in sustained small file create rates.
