
Re: Xfs delaylog hanged up

To: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Subject: Re: Xfs delaylog hanged up
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Wed, 24 Nov 2010 11:20:23 +1100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <4CEC3CB8.8000509@xxxxxxxxxxxxxxxxx>
References: <4CEAC412.9000406@xxxxxxxxxxxxx> <20101122232929.GJ13830@dastard> <4CEBA2D5.2020708@xxxxxxxxxxxxx> <20101123204609.GW22876@dastard> <4CEC3CB8.8000509@xxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.20 (2009-06-14)
On Tue, Nov 23, 2010 at 04:14:16PM -0600, Stan Hoeppner wrote:
> Dave Chinner put forth on 11/23/2010 2:46 PM:
> 
> > I've been unable to reproduce the problem with your test case (been
> > running over night) on a 12-disk, 16TB dm RAID0 array, but I'll keep
> > trying to reproduce it for a while. I note that the load is
> > generating close to 10,000 iops on my test system, so it may very
> > well be triggering load related problems in your raid controller...
> 
> Somewhat off topic, but how are you generating 10,000 IOPS by carving a
> 16TB LUN/volume from 12 x 2TB SATA disk spindles?  Such drives aren't
> even capable of 200 seeks per second.  Even if they were you'd top out
> at less than 2,500 IOPS (random).  16TB/12=1.33 TB per disk.  No such
> capacity disk exists.  So I assume you're using 12 x 2TB disks and
> slicing/dicing out 16TB.  What am I missing Dave?

512MB of BBWC (battery-backed write cache) backing the disks. The
BBWC does a much better job of reordering out-of-order writes than
the Linux elevators because 512MB is a much bigger window than a
couple of thousand 4k IOs.
Hence metadata/small file intensive workloads go much faster than
you'd expect from just looking at the IO patterns and the capability
of the disks.

IOWs, for write workloads that are not purely random, the disk
subsystem behaves more like an SSD than a RAID0 array of spinning
rust...
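The arithmetic behind the two windows can be sketched as follows (a
minimal illustration only; the 200-seek ceiling and "couple of
thousand" elevator queue depth are figures quoted in this thread,
while the 4k IO size used for the cache-window comparison is an
assumption):

```python
# Back-of-the-envelope comparison of raw spindle IOPS vs. the reordering
# windows of the Linux elevator and a 512MB BBWC. Hypothetical constants
# except where quoted from the emails above.

SEEKS_PER_DISK = 200         # generous random-IOPS ceiling for a SATA spindle
DISKS = 12
IO_SIZE = 4 * 1024           # 4k IOs, as in the elevator comparison above
BBWC_BYTES = 512 * 1024**2   # 512MB of battery-backed write cache
ELEVATOR_QUEUE_IOS = 2000    # "a couple of thousand 4k IOs"

raw_iops = SEEKS_PER_DISK * DISKS          # spindle-bound ceiling (~2,400)
bbwc_window_ios = BBWC_BYTES // IO_SIZE    # how many 4k IOs fit in the BBWC

print(f"raw spindle IOPS ceiling : {raw_iops}")
print(f"elevator reorder window  : {ELEVATOR_QUEUE_IOS} IOs")
print(f"BBWC reorder window      : {bbwc_window_ios} IOs "
      f"({bbwc_window_ios / ELEVATOR_QUEUE_IOS:.0f}x larger)")
```

With a reordering window roughly 65x larger than the elevator's, the
cache can absorb bursty metadata writes and stream them back to the
spindles near-sequentially, which is how the observed IOPS can exceed
the raw seek budget of the disks.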

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
