[ ... 2s CPU time to delete a file ... ]
> [root@siFlash test]# filefrag 1.r.48.0
> 1.r.48.0: 1364 extents found
>> It's not addressing the exact issue, but why are the files so
>> fragmented? Are they very hole-y or is it just an issue with
>> how they are written? Perhaps preallocation would help you
That's not that fragmented, and I am a bit surprised that it
takes 2s of CPU time to delete a bit over 1,000 extents (unless
there is a repeated linear search). I have seen over 800,000
extents in a file created on XFS by someone who thought it was a
fine idea to have sparse virtual disk images, especially for a
rapidly growing Maildir archive...
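Extent counts like the one filefrag prints above come from the
FIEMAP ioctl on filesystems that support it; as a minimal
sketch, a small C program that only counts a file's extents,
with error handling kept terse, could be:

    /* Count a file's extents with the FIEMAP ioctl.
       Build: cc -o extcount extcount.c
       Usage: ./extcount FILE */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>
    #include <linux/fiemap.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s FILE\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct fiemap fm;
        memset(&fm, 0, sizeof fm);
        fm.fm_start = 0;
        fm.fm_length = ~0ULL;   /* map the whole file */
        fm.fm_extent_count = 0; /* 0: count only, return no
                                   per-extent records */

        if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
            perror("FS_IOC_FIEMAP");
            return 1;
        }
        printf("%s: %u extents\n", argv[1], fm.fm_mapped_extents);
        close(fd);
        return 0;
    }

Passing 'fm_extent_count' as zero asks the kernel only for the
number of mapped extents, without copying out the per-extent
records.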
[ ... ]
> So to summarize, the delete performance will be (at least) in
> part a function of the fragmentation? [ ... ] A directory
> full of massively fragmented files will take longer to delete
> than a directory of contiguous and larger extents?
The list of extents is part of the filesystem metadata, and
deleting a file means freeing that metadata too. The same
happens with traditional block-tree based filesystem designs:
deleting a large 3-level tree of blocks can take a very long
time (though that cost is mostly seeks, not CPU).
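For a sense of scale, assuming (purely as an illustration)
ext2-style 4KiB blocks with 4-byte block pointers: each indirect
block holds 1024 pointers, so a full 3-level tree maps 1024^3
data blocks (about 4TiB) through 1+1024+1024^2, that is over a
million, indirect blocks, each of which must be read and freed
when the file is deleted.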
> And I did some experimentation using xfs_repair, and it seems to
> be the case there as well ... the higher the level of
> fragmentation, the longer the repair seems to take.
That's expected too: more metadata to verify.
In some ways a file with very many extents behaves like a file
on a non-extent, block-tree based filesystem. The special detail
about many extents and XFS is that XFS loads and keeps in memory
the entire extent table on file open. This to me seems a
reasonable design choice, as XFS tries pretty hard to ensure
that files have few extents, and offers applications ways (that
probably few applications use) to improve the chances of that,
such as preallocation.
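As an illustration of such an interface, posix_fallocate(3)
lets an application reserve a file's space in one go (the file
name and the 1GiB size below are just example values); where
fallocate(2) backs it, as on ext4 and XFS, this is a cheap
extent reservation rather than a slow write of zeroes:

    /* Preallocate a file in one go so the filesystem can pick
       one (or few) large contiguous extents instead of growing
       the file piecemeal. Name and size are example values. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        const off_t size = 1024L * 1024 * 1024; /* 1GiB, example */
        int fd = open("image.raw", O_CREAT | O_WRONLY, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* posix_fallocate() returns an error number rather than
           setting errno. */
        int err = posix_fallocate(fd, 0, size);
        if (err != 0) {
            fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
            return 1;
        }
        close(fd);
        return 0;
    }

On Linux there is also fallocate(2) itself (for example with
FALLOC_FL_KEEP_SIZE to reserve space without changing the
visible file size), and XFS has had its own preallocation
ioctls (XFS_IOC_RESVSP64) since before fallocate(2) existed.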