On Mon, Nov 29, 2010 at 12:11 AM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> On Sun, Nov 28, 2010 at 10:51:04PM +0000, Yclept Nemo wrote:
>> After 3-4 years of using one XFS partition for every mount point
>> (/,/usr,/etc,/home,/tmp...) I started noticing a rapid performance
>> degradation. Subjectively I now feel my XFS partition is 5-10x slower
>> ... while other partitions (ntfs,ext3) remain the same.
> Can you run some benchmarks to show this non-subjectively? Aged
> filesystems will be slower than new filesystems, and it should be
> measurable. Also, knowing what your filesystem contains (number of
> files, used capacity, whether you have run it near ENOSPC for
> extended periods of time, etc) would help us understand the way the
> filesystem has aged as well.
Certainly, if you are interested I can run either dbench or bonnie++
tests comparing an XFS partition (with default values from xfsprogs
3.1.3) on the new hard-drive to the existing partition on the old. As
I'm not sure what you're looking for, what command parameters should I
use?
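In the meantime, a crude sequential-write probe with dd would at least put a number on the 5-10x feeling. This is only a sketch; the target directory is an assumption, not anything from this thread - run it once on the aged XFS mount and once on a fresh one, then compare the MB/s figures dd reports:

```shell
#!/bin/sh
# Crude sequential-write timing. dd prints throughput on completion;
# conv=fsync makes sure the data actually hits the disk before timing stops.
# TARGET_DIR is an assumption -- pass the mount point you want to measure.
TARGET_DIR=${1:-/tmp}
dd if=/dev/zero of="$TARGET_DIR/ddtest.bin" bs=1M count=64 conv=fsync
rm -f "$TARGET_DIR/ddtest.bin"
```

bonnie++ or dbench will of course give far more detail (seeks, small-file create/delete rates), but even this would show a gross throughput difference.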
The XFS partition in question is 39.61GB in size, of which 30.71GB are
in use (8.90GB free). It contains a typical Arch Linux installation
with many programs and many personal files. Usage pattern as follows:
. runtime split roughly equally between near-ENOSPC and approximately 10.0GB free
. mostly small files, one or two exceptions
. ENOSPC often reached through carelessness
. xfs_fsr run very often
Breakdown of space:
/var: 1005.1 MB
/etc: 58.8 MB
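On the "what the filesystem looks like physically" question below, the free-space histogram from xfs_db might be worth posting too. A sketch, assuming the aged partition is /dev/sda5 (substitute your actual device; -r keeps it read-only):

```shell
# Read-only (-r) free-space summary: buckets of free extents by size.
# Many tiny extents and few large ones would indicate heavy free-space
# fragmentation on the aged filesystem. /dev/sda5 is an assumption.
xfs_db -r -c "freesp -s" /dev/sda5
```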
>> Now I just purchased a new hard-drive and I'm going to be copying all
>> my original files over onto a *new* XFS partition. When using mkfs.xfs
>> I'd like to optimize and avoid whatever it was that made my old XFS
>> partition slower than a snail.
> Nobody can give definite advice without first quantifying and then
> understanding the aging slowdown you've been seeing.
>> I was considering running "mkfs.xfs -d agcount=32 -i attr=2 -l
>> version=2,lazy-count=1,size=256m /dev/sda5".
>> Yes, I know that in xfs_progs 3.1.3 "-i attr=2 -l
>> version=2,lazy-count=1" are already default options. However I think I
>> should tweak the log size, blocksize, and data allocation group counts
>> beyond the default values and I'm looking for some recommendations or
> Why do you think you should tweak them?
To avoid the aging slowdown as well as to increase read/write/metadata
performance with small files.
>> I assume mkfs.xfs automatically selects optimal values, but I *have*
>> space to spare for a larger log section... and perhaps my old XFS
>> partition became sluggish when the log section had filled up, if this
>> is even possible.
> Well, you had a very small log (20MB) on the original filesystem,
> and so as the filesystem ages (e.g. free space fragments), each
> allocation/free transaction would be larger than on a new filesystem
> because of the larger btrees that need to be manipulated. With such
> a small log, that could be part of the reason for the slowdown you
> were seeing. However, without knowing what your filesystem looks
> like physically, this is only speculation.
> That being said, the larger log (50MB) that the new filesystem has
> shouldn't have the same degree of degradation under the same aging
> characteristics. It's probably not necessary to go larger than ~100MB
> for a partition of 100GB on a single spindle...
In this case I'll aim for a large log section, probably 256 or 512MB,
unless that would impede performance. That way there will be no problems
when I resize the partition to 200GB ... 300GB ... up to a maximum of
450GB. In fact the xfs_growfs manual page - which might be outdated -
warns that log resizing is not implemented, so it would probably be
prudent to create a generously sized log section up front.
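Since the log size has to be picked at mkfs time, something like the following is what I have in mind (a sketch only - /dev/sdb5 is a placeholder for the new partition, and everything else stays at the xfsprogs 3.1.3 defaults):

```shell
# Oversized internal log chosen up front, because xfs_growfs cannot resize
# the log after the fact. /dev/sdb5 is a placeholder device name.
mkfs.xfs -l size=256m /dev/sdb5
```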
>> Similarly a larger agcount should always give better performance,
>> Some resources claim that agcount should never fall below
> If those resources are right, then why would we default to 4 AGs for
> filesystems on single spindles?
Obviously you are against modifying the agcount - I won't touch it :)
>> I'm also hesitant about reducing the blocksize from a maximum of 4096
>> bytes, but since XFS manages my entire file-system tree, a blocksize
>> of 512, 1024,or even 2048 bytes might squeeze out some extra
>> performance. [I assume] the performance w.r.t. blocksize is:
>> . a larger blocksize dramatically increases large file performance,
> No, it doesn't - maybe a few percent difference when you've got
> multiple GB/s throughput, but it's mostly noise for single spindles.
> Extent based allocation makes block size pretty much irrelevant for
> sequential write performance...
>> but also increases space usage when dealing with small files.
> Not significantly enough to matter for modern disks.
>> . a smaller blocksize dramatically decreases performance for large files
> See above.
>> and somewhat increases performance for small files,
> Not really.
>> while also
>> slightly increasing space usage with extra inodes(?)
Not actually sure what I intended. My knowledge of file-systems
depends on Google and that statement was only a shot in the dark.
However, you've convinced me not to change the blocksize (keep in mind
I'm running an entire Linux installation from this one XFS partition,
small files included). If the blocksize option is so
performance-independent, why does it even exist?
>> I want to make it clear that I prefer performance over space efficiency.
> That's what the defaults are biased towards.
Good to know the defaults are sane - yet another reason not to modify
the blocksize and data allocation group counts.