On Wed, Oct 01, 2008 at 02:52:37PM -0300, Peter Cordes wrote:
> I just had an idea for speeding up writes to parity-based RAIDs
> (RAID4,5,6). If XFS wants to write sectors 1,2,3, 5,6,7, but it
> knows that block 4 is free space, it might be better to write sector 4
> (with zeros, don't put uninitialized kernel memory on disk!).

How does XFS know that block 4 is free space? Or indeed that this is
a single block-sized hole in a range of blocks mapped to different inodes
or filesystem metadata?
If you want something like this, you need to have the lower layer
discover holes like this and, instead of immediately initiating
an RMW cycle, call back to the filesystem to determine whether the
hole is free space. That works for all filesystems, not just XFS.

> probably only useful to do this if XFS has data in memory to prove
> that the gap is not part of the filesystem. Doing extra reads
> probably doesn't make sense except in very special cases. (e.g.
> repeated writes to the same location with the same hole, so just one
> read would let them all become full-block or even full-stripe writes.)

That's the sort of workload the stripe cache is supposed to optimise;
every subsequent sparse write to the same stripe line avoids the
read part of the RMW cycle. The filesystem is the wrong layer to
optimise this type of workload....
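[Not part of the original mail: a toy model of the stripe-cache behaviour described above, assuming the simplest possible policy - a stripe line, once read, stays resident. The names are illustrative, not md internals.]

```c
#include <stdbool.h>

#define NR_STRIPES 16

/* Toy stripe cache: which stripe lines currently hold valid old
 * data + parity, and how many reads have been issued so far. */
static bool stripe_cached[NR_STRIPES];
static int reads_issued;

/* Submit a partial (sub-stripe) write to a stripe line. Returns true
 * if the read half of the RMW cycle was needed, false if the cached
 * stripe let us skip straight to modify-write. */
static bool partial_write(int stripe)
{
    bool read_needed = !stripe_cached[stripe];

    if (read_needed) {
        reads_issued++;               /* read old data and parity */
        stripe_cached[stripe] = true; /* stripe line now resident */
    }
    /* ...recompute parity and write back new data + parity... */
    return read_needed;
}
```

The first sparse write to a stripe pays for the read; repeated writes to the same stripe line hit the cache, which is why this workload belongs below the filesystem.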

FWIW, XFS has its own problems with writeback triggering RMW
cycles - this sort of thing for data could be considered noise
compared to the RMW storm that can be caused by inode writeback
under memory pressure, as XFS has to do RMW cycles itself on the
inode cluster buffers. See the Inode Writeback section of this

This can only be fixed at the filesystem level because no amount of
tweaking the storage can improve the I/O patterns that XFS is
issuing. These RMW cycles in inode writeback can cause the inode
flush rate to drop to a few tens of inodes per second. When you have
hundreds of thousands of dirty inodes in a system, it can take
*hours* to flush the dirty inodes to disk....
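[Not part of the original mail: a back-of-envelope check of that claim, using assumed round numbers - 300,000 dirty inodes for "hundreds of thousands" and 30 inodes/sec for "a few tens per second".]

```c
/* Time to flush a backlog of dirty inodes at a given flush rate.
 * 300,000 inodes at 30 inodes/sec is 10,000 seconds - nearly
 * three hours, matching the "hours" figure above. */
static long flush_time_secs(long dirty_inodes, long inodes_per_sec)
{
    return dirty_inodes / inodes_per_sec;
}
```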