To: Eric Sandeen <sandeen@xxxxxxxxxxx>
Subject: Re: very slow file deletion on an SSD
From: Joe Landman <joe.landman@xxxxxxxxx>
Date: Sun, 27 May 2012 13:14:41 -0400
Cc: Krzysztof Adamski <k@xxxxxxxxxxx>, Stefan Ring <stefanrin@xxxxxxxxx>, linux-raid <linux-raid@xxxxxxxxxxxxxxx>, "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
On 05/27/2012 12:07 PM, Eric Sandeen wrote:
> On 5/27/12 9:59 AM, joe.landman@xxxxxxxxx wrote:
>> This is going to be a very fragmented file. I am guessing that this is
>> the reason for the long delete times. I'll do some more measurements
>> before going to 3.4.x, as per Eric's note.
>
> filefrag -v should also tell you how many fragments, and because it uses
> fiemap it probably won't run into the same problems. But it sounds like
> we can just assume very high fragmentation.
[root@siFlash test]# filefrag 1.r.48.0
1.r.48.0: 1364 extents found
> It's not addressing the exact issue, but why are the files so
> fragmented? Are they very hole-y, or is it just an issue with how they
> are written? Perhaps preallocation would help you here?
Possibly. We are testing the system using fio, doing random reads and writes. I'll see whether we can apply a preallocation scheme (before or during the run) for the files.
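Not from the thread, but a minimal sketch of what such a preallocation pass might look like — file names and sizes here are illustrative, not the actual fio test files:

```shell
# Hypothetical preallocation pass (names and the 16M size are illustrative,
# not from this thread): reserve each test file's full size up front with
# fallocate(1), so fio's later random writes land in already-allocated
# extents instead of allocating a little more space on every write.
for i in 0 1 2 3; do
    fallocate -l 16M "prealloc.$i"
done
ls -l prealloc.*    # each file reports its full size, already reserved
```

fio itself also has a fallocate= job option that can preallocate its data files before the workload starts, which would avoid a separate pass.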
So, to summarize: delete performance will be (at least in part) a function of fragmentation? A directory full of massively fragmented files will take longer to delete than a directory of larger, contiguous extents? I also did some experimentation with xfs_repair, and it seems to be the case there as well: the higher the level of fragmentation, the longer the repair seems to take.
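One hedged way to put a rough number on that relationship (the directory here is illustrative) is to sum the per-file extent counts that filefrag reports, since the unlink and repair work should scale with the total number of extents to walk and free:

```shell
# Rough measure of the extent-freeing work a bulk delete (or an xfs_repair
# pass) faces: sum filefrag's per-file extent counts. filefrag prints lines
# like "1.r.48.0: 1364 extents found", so field 2 is the extent count.
# TESTDIR is illustrative; point it at the real test directory.
TESTDIR=${TESTDIR:-.}
for f in "$TESTDIR"/*; do
    [ -f "$f" ] && filefrag "$f"
done 2>/dev/null | awk '{ total += $2 } END { print total + 0, "total extents" }'
```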