On Wed, Feb 05, 2014 at 02:29:26PM -0500, Brian Foster wrote:
> On 02/05/2014 12:47 AM, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> >
> > Recent changes to the log size scaling have resulted in using the
> > default size multiplier for the log size even on small filesystems.
> > Commit 88cd79b ("xfs: Add xfs_log_rlimit.c") changed the calculation
> > of the maximum transaction size that the kernel would issue and
> > that significantly increased the minimum size of the default log.
> > As such the size of the log on small filesystems was typically
> > larger than the previous default, even though the previous default
> > was still larger than the minimum needed.
> >
>
> Hey Dave,
>
> Can you elaborate on what you mean by the previous default being larger
> than the minimum needed? If that is the case, doesn't that mean the
> calculations based on the max transaction size are not valid? Perhaps
> I'm not parsing something here.
The previous change for calculating the minimum log size also
changed the default log size at the same time. i.e. it changed it to
min_logblocks * XFS_DFL_LOG_FACTOR if that fit inside an AG.
Here's an example with small AGs (25MB). Current mkfs:
# mkfs.xfs -f -d size=100m /dev/vdc | grep log
log      =internal log           bsize=4096   blocks=4265, version=2
That's a 17MB log in a 100MB filesystem. Way too large, and here's
what prompted this patch:
# mkfs.xfs -f -d size=100m,sunit=512,swidth=2048 /dev/vdc | grep log
log      =internal log           bsize=4096   blocks=1728, version=2
i.e. add stripe unit alignment, and the log gets 3x smaller. Say
what?
The new mkfs gives:
# ~/packages/mkfs.xfs -f -d size=100m /dev/vdc | grep log
log      =internal log           bsize=4096   blocks=2560, version=2
# ~/packages/mkfs.xfs -f -d size=100m,sunit=512,swidth=2048 /dev/vdc | grep log
log      =internal log           bsize=4096   blocks=2560, version=2
a 10MB log, which is consistent but arguably still too large for a
100MB filesystem.
So, what did we used to do in the 3.1.x series?
# apt-get install xfsprogs/stable
<installs 3.1.7>
# mkfs.xfs -f -d size=100m /dev/vdc | grep log
log      =internal log           bsize=4096   blocks=1200, version=2
# mkfs.xfs -f -d size=100m,sunit=512,swidth=2048 /dev/vdc | grep log
log      =internal log           bsize=4096   blocks=1216, version=2
Ok, so historically it's been min_logblocks sized....
> > Rework the default log size calculation such that it will use the
> > original log size default if it is larger than the minimum log size
> > required, and only use a larger log if the configuration of the
> > filesystem requires it.
> >
> > This is especially obvious in xfs/216, where the default log size is
> > 10MB all the way up to 16GB filesystems. The current mkfs selects a
> > log size of 50MB for the same size filesystems and this is
> > unnecessarily large.
> >
> > Return the scaling of the log size for small filesystems to
> > something similar to what xfs/216 expects.
> >
> > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> > ---
> >  mkfs/xfs_mkfs.c | 19 ++++++++++---------
> >  1 file changed, 10 insertions(+), 9 deletions(-)
> >
> > diff --git a/mkfs/xfs_mkfs.c b/mkfs/xfs_mkfs.c
> > index d82128c..4a29eea 100644
> > --- a/mkfs/xfs_mkfs.c
> > +++ b/mkfs/xfs_mkfs.c
> > @@ -2377,17 +2377,18 @@ _("size %s specified for log subvolume is too large, maximum is %lld blocks\n"),
> >  	logblocks = MAX(min_logblocks, logblocks);
> >  
> >  	/*
> > -	 * If the default log size doesn't fit in the AG size, use the
> > -	 * minimum log size instead. This ensures small filesystems
> > -	 * don't use excessive amounts of space for the log.
> > +	 * For small filesystems, we want to use the XFS_MIN_LOG_BYTES
> > +	 * for filesystems smaller than 16G if at all possible, ramping
> > +	 * up to 128MB at 256GB.
> >  	 */
> > -	if (min_logblocks * XFS_DFL_LOG_FACTOR >= agsize) {
> > -		logblocks = min_logblocks;
> > -	} else {
> > -		logblocks = MAX(logblocks,
> > -				MAX(XFS_DFL_LOG_SIZE,
> > -				min_logblocks * XFS_DFL_LOG_FACTOR));
> > +	 if (dblocks < GIGABYTES(16, blocklog)) {
> > +		logblocks = MIN(XFS_MIN_LOG_BYTES >> blocklog,
> > +				min_logblocks * XFS_DFL_LOG_FACTOR);
> >  	}
>
> Nit: extra space after tab before the 'if (dblocks < GIGABYTES(...)) {'
> line...
Oops. Will fix.
> More generally... by the time we get here, min_logblocks is at least
> XFS_MIN_LOG_BLOCKS and XFS_MIN_LOG_BYTES (if the fs is >=1GB). The only
> way we would use the min_logblocks based value is if min_logblocks is
> less than 1/5 of XFS_MIN_LOG_BYTES (due to DFL_LOG_FACTOR). After
> testing this a bit, creating a 20MB fs with 4k blocks gives me an
> initial min_logblocks of 853, which works out to ~16MB after
> DFL_LOG_FACTOR. So this effectively looks like an assignment of
> XFS_MIN_LOG_BYTES in that case.
Good point - I hadn't considered the initial >1GB check much further
up the stack, and that explains the difference between my patch and
the 3.1.x code.
> In the sub 1GB case, we skip the existing XFS_MIN_LOG_BYTES check, but
> this new block of code just adds it back, at least in the internal log case.
Right, this would handle it, I think.
	if (dblocks < GIGABYTES(1, blocklog)) {
		logblocks = min_logblocks;
	} else if (dblocks < GIGABYTES(16, blocklog)) {
		logblocks = MIN(XFS_MIN_LOG_BYTES >> blocklog,
				min_logblocks * XFS_DFL_LOG_FACTOR);
	} else {
		/*
		 * With a 2GB max log size, default to maximum size
		 * at 4TB. This keeps the same ratio from the older
		 * max log size of 128M at 256GB fs size. IOWs,
		 * the ratio of fs size to log size is 2048:1.
		 */
		logblocks = (dblocks << blocklog) / 2048;
		logblocks = logblocks >> blocklog;
		logblocks = MAX(min_logblocks, logblocks);
	}
Cheers,
Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
