Currently we do background inode writeback on demand from many
different places - xfssyncd, xfsbufd, xfsaild and the bdi writeback
threads. The result is that inodes can be pushed at any time and
there is little to no locality to the IO patterns resulting from
such writeback. Indeed, we can have competing writeback occurring,
which only serves to slow down writeback.
The idea behind this series is to have metadata buffers written
from xfsbufd via the delayed write queue rather than from all these
other places. All the other places do is mark the buffers as
delayed write so that the xfsbufd can issue them.
This means that inode flushes can no longer happen asynchronously,
but we still need a method for ensuring timely dispatch of buffers
that we may be waiting for IO completion on. To do this, we allow
delayed write buffers to be "promoted" in the delayed write queue.
This effectively short-cuts the aging of the buffers, and combined
with a demand flush of the xfsbufd we push all aged and promoted
buffers out at the same time.
Combine this with sorting the delayed write buffers to be written
into disk offset order before dispatch, and we vastly improve the
IO patterns for metadata writeback. IO is issued from one place and
in a disk/elevator friendly order.
- use generic list sort function
- when unmounting, push the delwri buffers first, then do sync inode
  reclaim so that reclaim doesn't block for 15 seconds waiting for
  delwri inode buffers to be aged and written before the inodes can
  be reclaimed
Perf results (average of 3 runs) on a debug XFS build (which means
allocation patterns are randomly varied, so runtimes are also a bit
variable):
Untar 2.6.32 kernel tarball, sync, then remove:

                        Untar+sync    rm -rf
    xfs-dev:                 25.2s     13.0s
    xfs-dev-delwri-1:        22.5s      9.1s
    xfs-dev-delwri-2:        21.9s      8.4s
4 processes each creating 100,000 five byte files in separate
directories concurrently, then 4 processes removing a directory each:

                        create        rm -rf
    xfs-dev:                8m32s      4m10s
    xfs-dev-delwri-1:       4m55s      3m42s
    xfs-dev-delwri-2:       4m56s      3m33s