
[PATCH v2 0/7] speculative preallocation quota throttling

To: xfs@xxxxxxxxxxx
Subject: [PATCH v2 0/7] speculative preallocation quota throttling
From: Brian Foster <bfoster@xxxxxxxxxx>
Date: Wed, 2 Jan 2013 13:08:04 -0500
Hi all,

This is v2 of the speculative prealloc throttling set. The primary changes are
to move the algorithm to a logarithmic throttler (akin to the current ENOSPC
throttling) and to adjust the implementation to separate the throttle trigger
logic from the throttle itself.

This has seen some decent sanity testing so far: several xfstests runs and
some tests that drive 32 writer threads into various limits to measure
effective use of free space (results below).

One thing I noticed is that this scheme is less effective than the linear
throttling approach at smaller limits, because the low free space trigger is
small relative to our maximum preallocation size (i.e., 5% of a 320GB limit is
16GB, and the max prealloc size is 8GB). I was a bit confused by the behavior
here as compared to the global prealloc throttling (which works quite well at
a comparable fs size), so I ran a little experiment: I ran this same test into
global ENOSPC on a version of XFS without the ENOSPC flush/retry sequence and
reproduced a similar reduction in effectiveness (e.g., the test stops at 285GB
used vs. 320GB).

To summarize, while the flush/retry might not directly free up this space, I
suspect the time spent scanning and retrying indirectly allows this test to
carry forward. I don't think this is necessarily a drawback of this approach,
just a point of clarification for myself and a data point suggesting that this
throttle mechanism combined with the introduction of an eofblocks scan/retry
sequence[1] should provide good behavior at smaller limits or at scale.

Brian

[1] - http://oss.sgi.com/archives/xfs/2012-12/msg00112.html

--- Test Results

Run 32 writers (10GB each) into a quota limit, with the remaining free space
(i.e., limit - 320GB) pre-consumed via fallocate:

iozone -w -c -e -i 0 -+n -r 4k -s 10g -t 32 -F /mnt/data/file{0..31}

Results are measured as the space consumed before the test stops, whether due
to completion or to error (EDQUOT/ENOSPC). 320GB is the ideal result:

512GB uquota limit:
- Baseline      - 273GB
- Throttling    - 291GB

1TB uquota limit:
- Baseline      - 275GB
- Throttling    - 293GB

5TB uquota limit:
- Baseline      - 273GB
- Throttling    - 321GB (no error)

I also ran a test with a 512GB pquota limit. The space usage remains at around
320GB because pquotas result in ENOSPC, but the delalloc_enospc tracepoint is
only triggered 136 times with throttling vs. 3940318 without.

v2:
- Fix up xfs_iomap_prealloc_size() rounding (patch 2).
- Add pre-calculated fields to xfs_dquot to support throttling.
- Move to logarithmic (shift) throttler and finer tuned trigger/throttle logic.

Brian Foster (7):
  xfs: reorganize xfs_iomap_prealloc_size to remove indentation
  xfs: push rounddown_pow_of_two() to after prealloc throttle
  xfs: cap prealloc size to free space before shift
  xfs: pass xfs_dquot to xfs_qm_adjust_dqlimits() instead of
    xfs_disk_dquot_t
  xfs: xfs_dquot prealloc throttling watermarks and low free space
  xfs: add quota-driven speculative preallocation throttling
  xfs: xfs_iomap_prealloc_size() tracepoint

 fs/xfs/xfs_dquot.c       |   46 +++++++++++++-
 fs/xfs/xfs_dquot.h       |   15 ++++-
 fs/xfs/xfs_iomap.c       |  150 +++++++++++++++++++++++++++++++++++++--------
 fs/xfs/xfs_qm.c          |    2 +-
 fs/xfs/xfs_qm_syscalls.c |    1 +
 fs/xfs/xfs_trace.h       |   24 +++++++
 fs/xfs/xfs_trans_dquot.c |    2 +-
 7 files changed, 207 insertions(+), 33 deletions(-)

-- 
1.7.7.6
