
Re: Performance problem with multiple parallel rm -rf's

To: Jens Rosenboom <j.rosenboom@xxxxxxxxxxxx>
Subject: Re: Performance problem with multiple parallel rm -rf's
From: Justin Piszcz <jpiszcz@xxxxxxxxxxxxxxx>
Date: Wed, 2 Dec 2009 09:49:27 -0500 (EST)
Cc: xfs@xxxxxxxxxxx
In-reply-to: <4B163B20.6030808@xxxxxxxxxxxx>
References: <4B163B20.6030808@xxxxxxxxxxxx>
User-agent: Alpine 2.00 (DEB 1167 2008-08-23)


On Wed, 2 Dec 2009, Jens Rosenboom wrote:

> On a large 13TB XFS volume that is being used for backups, I am seeing bad performance if multiple "rm -rf" processes are running in parallel. The backups are being done with rsnapshot and the first operation it does is removing the oldest snapshot. A single rsnapshot does this in reasonable time, but if four jobs are started at the same time, all their rm processes run for hours without making much progress.
>
> This seems to be related to the planned optimizations in
>
> http://xfs.org/index.php/Improving_Metadata_Performance_By_Reducing_Journal_Overhead
>
> Are there any other tuning options I might try? I'm already using "noatime,nodiratime,nobarrier,logbufs=8,logbsize=256k" as mount options and did enable lazy_counters for the fs.


For faster rm performance you need a bigger log (128MB-256MB); a larger log has been shown to increase delete performance. I am not sure there is a way to change the log size once the filesystem has been created, though. An alternative may be putting the log on a separate device (an external log).
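
Rough sketch of what that could look like if you do recreate the filesystem (the device names and mount point below are only examples, adjust them to your setup):

  # larger internal log at mkfs time
  mkfs.xfs -l size=256m /dev/sdb1

  # or put the log on a separate, fast device (external log)
  mkfs.xfs -l logdev=/dev/sdc1,size=256m /dev/sdb1
  mount -o logdev=/dev/sdc1 /dev/sdb1 /backup

  # check the current log size (reported in filesystem blocks)
  xfs_info /backup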

Also: nodiratime is not needed, as it is implied by noatime.
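
For example, an fstab line with the trimmed options might look like this (device and mount point are again just examples):

  /dev/sdb1  /backup  xfs  noatime,nobarrier,logbufs=8,logbsize=256k  0 0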

Justin.
