very slow file deletion on an SSD
Joe Landman
joe.landman at gmail.com
Sun May 27 12:14:41 CDT 2012
On 05/27/2012 12:07 PM, Eric Sandeen wrote:
> On 5/27/12 9:59 AM, joe.landman at gmail.com wrote:
>> This is going to be a very fragmented file. I am guessing that this
>> is the reason for the long duration delete. I'll do some more
>> measurements before going to 3.4.x as per Eric's note.
>
> filefrag -v should also tell you how many fragments, and because it
> uses fiemap it probably won't run into the same problems.
>
> But it sounds like we can just assume very high fragmentation.
>
[root at siFlash test]# filefrag 1.r.48.0
1.r.48.0: 1364 extents found
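Eric's filefrag -v form would show the per-extent layout (logical and
physical offset plus length for each extent) rather than just the count;
a minimal example against the same file:

  filefrag -v 1.r.48.0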
> It's not addressing the exact issue, but why are the files so fragmented?
> Are they very hole-y or is it just an issue with how they are written?
> Perhaps preallocation would help you here?
Possibly. We are testing the system with fio, doing random reads and
writes. I'll see if we can use a preallocation scheme (before or during
the run) for the files.
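As a rough sketch of what that might look like (the job name, file name,
size, and engine settings below are placeholders, not our actual test
configuration), fio's fallocate option could preallocate the file before
the random writes start:

  [prealloc-randwrite]
  filename=1.r.48.0
  size=8g
  ioengine=libaio
  iodepth=32
  direct=1
  bs=4k
  rw=randwrite
  fallocate=posix

or, preallocating up front outside of fio:

  xfs_io -f -c "falloc 0 8g" 1.r.48.0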
So to summarize, the delete performance will be (at least in part) a
function of the fragmentation? A directory full of massively fragmented
files will take longer to delete than a directory of files with larger,
contiguous extents? I did some experimentation with xfs_repair, and it
seems to be the case there as well ... the higher the level of
fragmentation, the longer the repair seems to take.
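If it would help, a rough way to test that directly is to time each
unlink against the extent count filefrag reports. This is only a sketch,
run from the test directory, and the *.r.* glob is just an assumption
about our file naming:

  for f in *.r.*; do
      n=$(filefrag "$f" | awk '{print $2}')
      /usr/bin/time -f "%e s, $n extents, $f" rm -f "$f"
  done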
>
> -Eric