
Re: Performance problem with multiple parallel rm -rf's

To: Jens Rosenboom <j.rosenboom@xxxxxxxxxxxx>
Subject: Re: Performance problem with multiple parallel rm -rf's
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 3 Dec 2009 12:05:26 +1100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <4B163B20.6030808@xxxxxxxxxxxx>
References: <4B163B20.6030808@xxxxxxxxxxxx>
User-agent: Mutt/1.5.18 (2008-05-17)
On Wed, Dec 02, 2009 at 11:02:08AM +0100, Jens Rosenboom wrote:
> On a large 13TB XFS volume that is being used for backups, I am seeing  
> bad performance if multiple "rm -rf" processes are running in parallel.  
> The backups are being done with rsnapshot and the first operation it  
> does is removing the oldest snapshot. A single rsnapshot does this in  
> reasonable time, but if four jobs are started at the same time, all  
> their rm processes run for hours without making much progress.
>
> This seems to be related to the planned optimizations in
>
> http://xfs.org/index.php/Improving_Metadata_Performance_By_Reducing_Journal_Overhead

Not directly, I think. More likely it is the effect of cold caches
on the inode read rate.

That is, a cold-cache 'rm -rf' has to do a substantial amount of
*read* IO to pull the inodes into memory before they can be
unlinked (i.e. an unlink is roughly one read IO and two write IOs).

If you are doing multiple cold-cache 'rm -rf' operations in
parallel, you will be causing more disk seeks while reading the
inodes you are trying to unlink, and that slows the unlink rate
down further - an unlink can only go as fast as its inode can be
read off disk.
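
As a rough back-of-envelope (the numbers here are assumed for
illustration, not measured on your array):

    random read rate of the array:   ~200 IOPS (assumed)
    read IO per cold-cache unlink:   ~1
    => unlink ceiling for one rm:    ~200 inodes/sec
    four rm's in parallel:           they share that same ~200 IOPS and
                                     add extra seeks between four working
                                     sets, so each runs well below
                                     50 inodes/sec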

Effectively there is not much you can do about this - you could try
doing a traversal of the old snapshot first (e.g. ls -lR <snapshot>
> /dev/null) to get the cache populated as quickly as possible
before doing the unlink traversal, but that requires that you have
plenty of memory available (i.e. enough to hold more inodes than
multiple parallel snapshot traversals will read).
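
A minimal sketch of that, assuming the snapshot being retired lives
at /backup/hourly.5 (a made-up path - substitute your own) and that
memory is large enough to keep its inodes cached until the rm
reaches them:

    ls -lR /backup/hourly.5 > /dev/null   # warm the inode cache
    rm -rf /backup/hourly.5               # unlink then needs mostly write IO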

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
