To: Speedy Milan <speedy.milan@xxxxxxxxx>
Subject: Re: rm -f * on large files very slow on XFS + MD RAID 6 volume of 15x 4TB of HDDs (52TB)
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Wed, 23 Apr 2014 12:18:35 +1000
Cc: linux-kernel@xxxxxxxxxxxxxxx, Ivan Pantovic <gyro.ivan@xxxxxxxxx>, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <CAHuzUScfp19c_th_pfsZs05+yDz34MuEH-P1f+FF1dcivfH=5Q@xxxxxxxxxxxxxx>
References: <CAHuzUScfp19c_th_pfsZs05+yDz34MuEH-P1f+FF1dcivfH=5Q@xxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
[cc xfs@xxxxxxxxxxx]

On Mon, Apr 21, 2014 at 10:58:53PM +0200, Speedy Milan wrote:
> I want to report very slow deletion of 24 50GB files (in total 12 TB),
> all present in the same folder.

total = 1.2TB?

> OS is CentOS 6.4, with upgraded kernel 3.13.1.
> 
> The hardware is a Supermicro server with 15x 4TB WD Se drives in MD
> RAID 6, totalling 52TB of free space.
> 
> XFS is formatted directly on the RAID volume, without LVM layers.
> 
> Deletion was done with rm -f * command, and it took upwards of 1 hour
> to delete the files.
> 
> File system was filled completely prior to deletion.

Oh, that's bad. It's likely you've fragmented the files into
millions of extents.
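
[Not part of the original reply: one way to confirm this diagnosis is
to look at the extent map of one of the files before deleting it,
e.g. with xfs_bmap from xfsprogs, which prints roughly one line per
extent:

  $ xfs_bmap -v /path/to/one/of/the/files | wc -l

A count in the millions for a 50GB file means the file is badly
fragmented. The path above is only a placeholder.]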

> rm was mostly waiting (D state), probably for kworker threads, and

No, waiting for IO.

> iostat was showing big HDD utilization numbers and very low throughput
> so it looked like a random HDD workload was in effect.

Yup, smells like file fragmentation. Non-fragmented 50GB files
should be removable in a few milliseconds, but if you've badly
fragmented the files, there could be 10 million extents in a 50GB
file. A few milliseconds per extent removal gives you....
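
[Spelling out the arithmetic implied above, using the numbers already
mentioned (10 million extents, a few milliseconds per extent, taking
"a few" as ~2 ms purely for illustration):

  10,000,000 extents x ~2 ms/extent = ~20,000 seconds, i.e. ~5.5 hours

so a delete time measured in hours rather than milliseconds is exactly
what badly fragmented files would produce.]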

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
