
Re: very slow file deletion on an SSD

To: Eric Sandeen <sandeen@xxxxxxxxxxx>
Subject: Re: very slow file deletion on an SSD
From: Joe Landman <joe.landman@xxxxxxxxx>
Date: Sun, 27 May 2012 13:17:05 -0400
Cc: Krzysztof Adamski <k@xxxxxxxxxxx>, Stefan Ring <stefanrin@xxxxxxxxx>, linux-raid <linux-raid@xxxxxxxxxxxxxxx>, "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
In-reply-to: <4FC25126.7070002@xxxxxxxxxxx>
References: <4FBF60D1.80104@xxxxxxxxx> <20120526231838.GR25351@dastard> <4FC16683.9060800@xxxxxxxxx> <20120527000701.GS25351@dastard> <4FC18845.6030301@xxxxxxxxx> <4FC19408.5020502@xxxxxxxxxxx> <CAAxjCEzX4dmm6YuR__1_a6mw+D=vizV0VrCLqCC-d5GSgkbE6g@xxxxxxxxxxxxxx> <1338124504.28212.255.camel@xxxxxxxxxxxxxxxxxx> <5856C5F0-C13E-415D-907B-491C1BBCC0C2@xxxxxxxxx> <4FC25126.7070002@xxxxxxxxxxx>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120430 Thunderbird/12.0.1
On 05/27/2012 12:07 PM, Eric Sandeen wrote:
> On 5/27/12 9:59 AM, joe.landman@xxxxxxxxx wrote:
>> This is going to be a very fragmented file.  I am guessing that this
>> is the reason for the long-duration delete.  I'll do some more
>> measurements before going to 3.4.x, as per Eric's note.
>
> filefrag -v should also tell you how many fragments, and because it
> uses fiemap it probably won't run into the same problems.
>
> But it sounds like we can just assume very high fragmentation.
>
> It's not addressing the exact issue, but why are the files so fragmented?
> Are they very hole-y, or is it just an issue with how they are written?
> Perhaps preallocation would help you here?

... and one pass with xfs_fsr seems to have "fixed" the problem

[root@siFlash test]# xfs_fsr
xfs_fsr -m /proc/mounts -t 7200 -f /var/tmp/.fsrlast_xfs ...
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
/data/1 start inode=0
/data/2 start inode=0
/data/3 start inode=0
Completed all 10 passes
[root@siFlash test]# filefrag  1.r.48.0
1.r.48.0: 1 extent found

[root@siFlash test]# rm -f 1.r.48.0

(very fast)
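On Eric's preallocation point: reserving the file's full size up front (fallocate(2) / posix_fallocate(3) at creation time) should let the allocator pick one contiguous extent instead of growing the file piecemeal, avoiding this fragmentation in the first place. A minimal sketch of the idea, assuming the writer application controls file creation (path and size here are just for illustration):

```python
import os
import tempfile

def preallocate(path, size):
    """Create `path` and reserve `size` bytes before any data is written.

    On Linux, os.posix_fallocate() reaches fallocate(2), so the
    filesystem can hand out one contiguous extent up front rather
    than extending the file write by write.
    """
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    try:
        os.posix_fallocate(fd, 0, size)
    finally:
        os.close(fd)

# Illustrative usage: reserve 8 MiB before the writer starts appending.
path = os.path.join(tempfile.mkdtemp(), "example.dat")
preallocate(path, 8 * 1024 * 1024)
print(os.stat(path).st_size)  # file size reflects the reserved length
```

Whether that is workable depends on whether the writer knows the final size; if it doesn't, the XFS allocsize mount option (larger speculative preallocation) may be the easier knob to turn.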

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@xxxxxxxxxxxxxxxxxxxxxxx
web  : http://scalableinformatics.com
       http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
