
Re: deleting 2TB lots of files with delaylog: sync helps?

To: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Subject: Re: deleting 2TB lots of files with delaylog: sync helps?
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Wed, 1 Sep 2010 13:41:56 +1000
Cc: xfs@xxxxxxxxxxx
In-reply-to: <4C7DC21B.1040705@xxxxxxxxxxxxxxxxx>
References: <201009010130.41500@xxxxxx> <20100901000631.GO705@dastard> <4C7DC21B.1040705@xxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.20 (2009-06-14)
On Tue, Aug 31, 2010 at 10:01:47PM -0500, Stan Hoeppner wrote:
> Dave Chinner put forth on 8/31/2010 7:06 PM:
> 
> > You're probably CPU bound, not IO bound.
> 
> 7200 rpm is the highest spindle speed for 2TB drives--5400 is most
> common.  None of them are going to do much over 200 random seeks/second,
> if that.  That's 400 tops for two drives.
> 
> Using any modern Intel/AMD ~2 GHz CPU, you think he's CPU bound?

Absolutely.

> Apparently this "rm -rf" type operation is much more complex than I
> previously believed.

Nothing in XFS is simple. ;)

Unlinks that free whole inode clusters result in no inode writeback
load, so the majority of the IO is log traffic. Hence they are
either log IO bound or read latency bound.  A pair of 2TB SATA
drives will be good for at least 150MB/s of log throughput, but
the numbers are nowhere near that.

Without delayed logging, 150MB/s is enough for a single threaded
unlink to consume an entire CPU core on any modern CPU, and there
may be enough bandwidth for two threads to max out 2 CPUs. With
delaylog, log throughput is reduced by an order of magnitude, so
it should be good for at least 10x that number of CPU cores running
flat out unless they are latency bound reading the directories
and inodes into memory.....
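To make the arithmetic above explicit, here is a rough back-of-envelope sketch, using the numbers from this thread (150MB/s of log bandwidth, an order-of-magnitude reduction in log traffic with delaylog); the per-core figure is an assumption taken from the discussion, not a measurement:

```python
# Back-of-envelope sketch of the log-bandwidth argument above.
# Assumed numbers (from the discussion, not measured):
LOG_BW_MBPS = 150          # sustainable log throughput of two 2TB SATA drives
PER_CORE_LOG_MBPS = 150    # log traffic one core of unlinks can generate
                           # without delayed logging
DELAYLOG_REDUCTION = 10    # delaylog cuts log IO by roughly 10x

def cores_before_log_bound(per_core_mbps, bw_mbps=LOG_BW_MBPS):
    """How many cores of unlink work the log bandwidth can absorb."""
    return bw_mbps / per_core_mbps

# Without delayed logging: a single core saturates the log.
print(cores_before_log_bound(PER_CORE_LOG_MBPS))  # -> 1.0

# With delaylog each core generates ~10x less log traffic, so ~10 cores
# can run flat out before the log itself becomes the bottleneck.
print(cores_before_log_bound(PER_CORE_LOG_MBPS / DELAYLOG_REDUCTION))  # -> 10.0
```

In practice the workload hits the read-latency wall (seeking to pull directories and inodes into memory) well before the log does, which is why the observed numbers fall short of these ceilings.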

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
