Joe Hsu wrote:
> Well, I have multiple processes running concurrently, and each writes
> to its own files (the sort of files I mentioned), while at the same
> time I have other programs doing normal but light I/O to other files
> on the same xfs partition.
> Once I thought maybe I could pre-allocate these special files within a
> directory that has fixed allocation groups (I guess that means fixed
> sets of blocks), and then try to make the 'truncate to 0 and
> pre-allocate' requests sequential across the running processes. But if
> XFS has this feature, I cannot find out how to use it.
There is no interface to re-mark existing blocks as unwritten, I'm
afraid. It sounds like an interesting interface, but it's not there.
> Why am I doing this? Why not just over-write the file? When doing
> partial over-writes, some blocks may be read in for a partial update
> before they are written out, which hurts I/O performance.
I guess it's not possible for you to do whole-block IO instead? Or even
pad out the writes to block boundaries if needed?
> After days of testing (I only ftruncate to 0 and re-preallocate files
> as needed), fragmentation has become much more serious, sigh.
It's interesting that it's so bad; I'd have hoped that if you free a
contiguous chunk of blocks and then immediately reallocate them on the
same inode, they'd get preallocated nicely... How bad is it?
> 2009/5/22 Eric Sandeen <sandeen@xxxxxxxxxxx>:
>> Joe Hsu wrote: Do you really need the exact same blocks? What if
>> you just truncate to 0 & re-allocate?