
Re: Xfs delaylog hanged up

To: Spelic <spelic@xxxxxxxxxxxxx>
Subject: Re: Xfs delaylog hanged up
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Fri, 26 Nov 2010 15:20:59 +1100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <4CEEF275.7090800@xxxxxxxxxxxxx>
References: <4CEAC412.9000406@xxxxxxxxxxxxx> <20101122232929.GJ13830@dastard> <4CEBA2D5.2020708@xxxxxxxxxxxxx> <20101123204609.GW22876@dastard> <4CEEF275.7090800@xxxxxxxxxxxxx>
User-agent: Mutt/1.5.20 (2009-06-14)
On Fri, Nov 26, 2010 at 12:34:13AM +0100, Spelic wrote:
> On 11/23/2010 09:46 PM, Dave Chinner wrote:
> >...
> >I note that the load is
> >generating close to 10,000 iops on my test system, so it may very
> >well be triggering load related problems in your raid controller...
> 
> Dave thanks for all explanations on the BBWC,
> 
> I wanted to ask how can you measure that it's 10,000 IOPS with that
> workload. Is it by iostat -x ?

http://marc.info/?l=linux-fsdevel&m=129013629728687&w=2

> but only for a few shots of iostat, not for the whole run of the
> "benchmark". Do you mean you have 10000 averaged over the whole
> benchmark?

It peaked at over 10,000 iops, lowest rate was ~4000iops and the
average would have been around 7000iops.
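For anyone wanting to reproduce the measurement, something like the
sketch below averages the r/s and w/s columns from an iostat log taken
during the run. The device name and the column layout are assumptions
(they vary by sysstat version); the canned printf lines stand in for a
real log so the arithmetic is visible:

```shell
# Capture per-second extended stats for the device during the run, e.g.:
#   iostat -xd 1 sdb > iops.log
# then average read + write IOPS over all samples.  The printf below fakes
# two samples of iops.log; with this (assumed) layout fields 4 and 5 are
# r/s and w/s.
printf 'sdb 0.0 0.0 5000.0 4200.0\nsdb 0.0 0.0 6100.0 4900.0\n' \
    | awk '{ n++; sum += $4 + $5 } END { printf "avg IOPS: %.0f\n", sum / n }'
```

That prints the mean of read+write IOPS across the samples; pointing the
awk at a real iops.log gives the run average quoted above.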

> Also I'm curious, do you remember how much time does it take to
> complete one run (10 parallel tar unpacks) on your 12-disk raid0 +
> BBWC?

33 seconds, limited by the decompression rate (i.e. CPU bound).

> Probably a better test would exclude the unbzip2 part from the
> benchmark, like the following, but it probably won't make more than
> a 10sec difference:
> 
> /perftest/xfs# bzcat linux-2.6.37-rc2.tar.bz2 > linux-2.6.37-rc2.tar
> /perftest/xfs# mkdir dir{1,2,3,4,5,6,7,8,9,10}
> /perftest/xfs# for i in {1..10} ; do time tar -xf
> linux-2.6.37-rc2.tar -C dir$i & done ; echo waiting now ; time wait;
> echo syncing now ; time sync

I'm running QA tests on my test machine right now, so I don't have a
direct comparison with the above numbers for you.

However, my workstation has a pair of 120GB sandforce 1200 SSDs in
RAID0 running 2.6.37-rc1 w/ delaylog and the results are 40s for the
compressed tarball and 16s for the uncompressed tarball.

The uncompressed tarball run had lower IOPS and much higher
bandwidth, as much more request merging was being done in the IO
elevator than for the compressed tarball...
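The merge effect is visible in iostat's rrqm/s and wrqm/s columns, or
equivalently in the average request size (throughput divided by IOPS).
A quick back-of-the-envelope check, with made-up numbers purely for
illustration (not values measured here):

```shell
# Average request size = bandwidth / IOPS.  With heavy elevator merging the
# same bandwidth is delivered in fewer, larger requests.  The kB/s and IOPS
# figures below are illustrative assumptions, not measurements:
awk 'BEGIN {
    printf "compressed:   %.0f kB/req\n",  70000 / 7000;  # high IOPS, small requests
    printf "uncompressed: %.0f kB/req\n", 200000 / 2000;  # fewer, larger merged requests
}'
```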

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
