On Thu, Jun 02, 2011 at 04:59:25PM -0700, Phil Karn wrote:
> On 6/2/11 2:24 PM, Paul Anderson wrote:
> > The data itself has very odd lifecycle behavior, as well - since it is
> > research, the different stages are still being sorted out, but some
> > stages are essentially write once, read once, maybe keep, maybe
> > discard, depending on the research scenario.
> > The bulk of the work is not small-file - almost all is large files.
> Out of curiosity, do your writers use the fallocate() call? If not, how
> fragmented do your filesystems get?
> Even if most of your data isn't read very often, it seems like a good
> idea to minimize its fragmentation because that also reduces
> fragmentation of the free list, which makes it easier to keep contiguous
> other files that *are* heavily read. Also, fewer extents per file means
> less metadata per file, ergo less metadata and log I/O, etc.
> When a writer knows in advance how big a file will be, I can't see any
> downside to having it call fallocate() to let the file system know.
You're ignoring the fact that delayed allocation effectively does
this for you without needing to physically allocate the blocks.
So when you have files that are short lived, you don't actually do
any allocation at all. Further, delayed allocation results in
allocation order according to writeback order rather than write()
order, so I/O patterns are much nicer when using delayed allocation.
Basically, you are removing one of the major I/O optimisation
capabilities of XFS by preallocating everything like this.
> after I switched to XFS six months ago I've been running locally patched
> versions of rsync/tar/cp and so on, and they really do minimize
> fragmentation with very little effort.
So you don't have any idea of how well XFS minimises fragmentation
without needing to use preallocation? Sounds like you have a classic
case of premature optimisation. ;)