The updated speculative preallocation algorithm becomes less
effective in situations with a high number of concurrent,
sequential writers. When running 32 sequential writers on a system
with 32GB RAM, preallocation sizes become fixed at around 128MB.
Update the heuristic to base the size of the prealloc on double
the size of the preceding extent. This preserves the original
aggressive speculative preallocation behavior at the slight cost
of larger preallocated data regions following holes in sparse
files.
Signed-off-by: Brian Foster <bfoster@xxxxxxxxxx>
---
Hi all,
This is based on Dave's suggestion on IRC to address the problem of
high concurrent writer counts defeating speculative preallocation. It
resolves the reported problem and generally preserves the prealloc
limiting within sparse files:
xfs_io -f -c "pwrite 0 31m" \
-c "pwrite 33m 1m" \
-c "pwrite 128m 1m" \
-c "fiemap -v" /mnt/file
wrote 32505856/32505856 bytes at offset 0
31 MiB, 7936 ops; 0.0000 sec (1.082 GiB/sec and 283621.0286 ops/sec)
wrote 1048576/1048576 bytes at offset 34603008
1 MiB, 256 ops; 0.0000 sec (626.174 MiB/sec and 160300.5636 ops/sec)
wrote 1048576/1048576 bytes at offset 134217728
1 MiB, 256 ops; 0.0000 sec (624.220 MiB/sec and 159800.2497 ops/sec)
/mnt/file:
EXT: FILE-OFFSET BLOCK-RANGE TOTAL FLAGS
0: [0..65407]: 232..65639 65408 0x0
1: [65408..67583]: hole 2176
2: [67584..71551]: 67816..71783 3968 0x0
3: [71552..262143]: hole 190592
4: [262144..266111]: 71784..75751 3968 0x1
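As a rough illustration outside the patch itself, the sizing logic after
this change amounts to the sketch below. The constant values and the
function/parameter names here are stand-ins for illustration, not the
kernel's actual definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in constants for illustration only; the real values live in
 * the XFS headers. */
#define MAXEXTLEN      ((1u << 21) - 1)   /* assumed max extent length, in blocks */
#define HOLESTARTBLOCK ((uint64_t)-2)     /* assumed marker for a hole */

/*
 * Sketch of the patched heuristic: size the initial speculative
 * preallocation at double the preceding extent, so each subsequent
 * extent can keep growing, while still disabling prealloc entirely
 * after a hole.
 */
static uint64_t prealloc_initial_size(uint64_t prev_startblock,
				      uint64_t prev_blockcount,
				      uint64_t offset_fsb)
{
	if (prev_startblock == HOLESTARTBLOCK)
		return 0;			/* no prealloc following a hole */
	if (prev_blockcount <= (MAXEXTLEN >> 1))
		return prev_blockcount << 1;	/* double the previous extent */
	return offset_fsb;			/* otherwise fall back to the EOF offset */
}
```

With a 64-block preceding extent this yields a 128-block prealloc;
once the previous extent exceeds half of MAXEXTLEN, doubling would
overflow the maximum extent length, so the code falls back to sizing
by the EOF offset instead.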
Brian
fs/xfs/xfs_iomap.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 912d83d..45a382d 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -362,7 +362,7 @@ xfs_iomap_eof_prealloc_initial_size(
if (imap[0].br_startblock == HOLESTARTBLOCK)
return 0;
if (imap[0].br_blockcount <= (MAXEXTLEN >> 1))
- return imap[0].br_blockcount;
+ return imap[0].br_blockcount << 1;
return XFS_B_TO_FSB(mp, offset);
}
--
1.7.7.6