On 6/2/11 2:24 PM, Paul Anderson wrote:
> The data itself has very odd lifecycle behavior, as well - since it is
> research, the different stages are still being sorted out, but some
> stages are essentially write once, read once, maybe keep, maybe
> discard, depending on the research scenario.
> The bulk of the work is not small-file - almost all is large files.
Out of curiosity, do your writers use the fallocate() call? If not, how
fragmented do your filesystems get?
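(For anyone wanting to check: a quick way to see per-file extent counts
is filefrag from e2fsprogs, which works on most filesystems; on XFS,
xfs_bmap gives the same information. The path below is just an
illustration.)

```shell
# Write a small test file, then list its extents.
dd if=/dev/zero of=/tmp/frag_demo bs=1M count=4 status=none
filefrag -v /tmp/frag_demo
# On XFS you could instead run:
#   xfs_bmap -v /tmp/frag_demo
```

A file written in one go on a lightly loaded filesystem usually shows
one or two extents; dozens of extents for a file that size would be a
sign of fragmentation.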
Even if most of your data isn't read very often, it seems like a good
idea to minimize its fragmentation, because that also reduces
fragmentation of the free list, which in turn makes it easier to keep
other files that *are* heavily read contiguous. Also, fewer extents per
file means less metadata per file, ergo less metadata and log I/O, etc.
When a writer knows in advance how big a file will be, I can't see any
downside to having it call fallocate() to let the file system know.
Since switching to XFS six months ago I've been running locally patched
versions of rsync/tar/cp and so on, and they really do minimize
fragmentation with very little effort.