On Fri, Oct 15, 2010 at 08:51:24AM +0200, Michael Monnerie wrote:
> On Donnerstag, 14. Oktober 2010 Dave Chinner wrote:
> > > I guess the reason one might want the "allocsize" mount
> > > option now becomes the opposite of why one might have
> > > wanted it before. I.e., it would be used to reduce
> > > the size of the preallocated range beyond EOF, which I
> > > could envision might be reasonable in some circumstances.
> > It now becomes the minimum preallocation size, rather than both the
> > minimum and the maximum....
> Until now, I often set allocsize to be <nr of data disks>*<stripe size>,
> i.e. in an 8-disk RAID-6 with 64KB stripe size: 6*64 = 384KB.
> I guess this should provide the best performance.
It's not doing what you think it is. From the mount option documentation:
> Is my assumption true?
	Sets the buffered I/O end-of-file preallocation size when
	doing delayed allocation writeout (default size is 64KiB).
	Valid values for this option are page size (typically 4KiB)
	through to 1GiB, inclusive, in power-of-2 increments.
384KB is not a power of 2, so the code will round the value down to
the nearest power of 2, which means you're actually telling it to
preallocate 256KB at a time.
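The effect of that round-down can be sketched like this (an
illustrative sketch only, not the actual XFS kernel code):

```python
# Sketch: round an allocsize value down to the nearest power of 2,
# as XFS does internally. Illustrative only, not kernel code.

def round_down_pow2(n: int) -> int:
    """Round n down to the nearest power of two."""
    return 1 << (n.bit_length() - 1)

KIB = 1024
requested = 384 * KIB            # allocsize=384k (6 data disks * 64KiB)
effective = round_down_pow2(requested)
print(effective // KIB)          # -> 256, the size actually used
```

So an allocsize of 384k silently behaves as 256k.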
> Will it change with the new code?
Entirely possible. We can and do change the behaviour of mount options
when doing so results in an improvement.
> Does XFS automatically use allocsize=<1 full stripe> so I can skip my
> manual allocsize options?
No. I will refer you to the swalloc mount option, though:
	Data allocations will be rounded up to stripe width boundaries
	when the current end of file is being extended and the file
	size is larger than the stripe width size.
Which affects both delayed allocation (after speculative prealloc
has been calculated) and physical allocation for direct IO.
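That stripe-width round-up can be sketched as follows (an illustrative
sketch only, with a hypothetical stripe width for the 8-disk RAID-6
example above; not the actual XFS allocator code):

```python
# Sketch: round an extending allocation up to the next stripe width
# boundary, as the swalloc mount option does. Illustrative only.

KIB = 1024
STRIPE_WIDTH = 6 * 64 * KIB      # e.g. 8-disk RAID-6: 6 data disks * 64KiB

def round_up_stripe(length: int, swidth: int = STRIPE_WIDTH) -> int:
    """Round an allocation length up to a multiple of the stripe width."""
    return ((length + swidth - 1) // swidth) * swidth

# A 500KiB extending write (file already larger than the stripe width)
# gets rounded up to two full stripe widths:
print(round_up_stripe(500 * KIB) // KIB)  # -> 768
```

With swalloc set correctly for the array geometry, there is no need to
hand-tune allocsize to the stripe width.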