
Re: Xfs delaylog hanged up

To: xfs@xxxxxxxxxxx
Subject: Re: Xfs delaylog hanged up
From: Spelic <spelic@xxxxxxxxxxxxx>
Date: Fri, 26 Nov 2010 00:34:13 +0100
In-reply-to: <20101123204609.GW22876@dastard>
References: <4CEAC412.9000406@xxxxxxxxxxxxx> <20101122232929.GJ13830@dastard> <4CEBA2D5.2020708@xxxxxxxxxxxxx> <20101123204609.GW22876@dastard>
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: Gecko/20100713 Thunderbird/3.0.6
On 11/23/2010 09:46 PM, Dave Chinner wrote:
> I note that the load is generating close to 10,000 iops on my test
> system, so it may very well be triggering load related problems in
> your raid controller...

Dave thanks for all explanations on the BBWC,

I wanted to ask: how do you measure that this workload generates 10,000 IOPS? Is it with iostat -x?

If so, which column exactly do you look at, and over what period do you average the values? I too can sometimes see values up to around 10000 in the "w/s" column for my MD RAID array (currently a 16-disk RAID-5 with XFS delaylog), if I use

iostat -x 10      (this averages write IOPS over 10-second intervals, I think)

but only for a few iostat samples, not for the whole run of the "benchmark". Do you mean you see 10000 averaged over the whole benchmark?
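One way to get a whole-run average rather than eyeballing individual 10-second snapshots would be to log iostat for the duration of the benchmark and average afterwards. A rough sketch (the device name "md0" and the field number for w/s are assumptions here; the column layout differs between sysstat versions, so check the position of w/s in your own iostat -x header first):

```shell
# Log extended stats every 10s for the whole benchmark run, in the background.
iostat -x 10 > iostat.log &
iostat_pid=$!

# ... run the tar-unpack benchmark here ...

kill "$iostat_pid"

# Average the w/s column (assumed to be field 5 in this layout) across
# all samples for the md0 device:
awk '/^md0 / { sum += $5; n++ } END { if (n) printf "avg w/s: %.0f\n", sum / n }' iostat.log
```

That would answer whether the 10000 figure holds for the whole run or only for short bursts.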

Also, I'm curious: do you remember how long one run (10 parallel tar unpacks) takes to complete on your 12-disk RAID-0 + BBWC?

A better test would probably exclude the bunzip2 step from the benchmark, like the following, though it probably won't make more than a 10-second difference:

/perftest/xfs# bzcat linux-2.6.37-rc2.tar.bz2 > linux-2.6.37-rc2.tar
/perftest/xfs# mkdir dir{1,2,3,4,5,6,7,8,9,10}
/perftest/xfs# for i in {1..10} ; do time tar -xf linux-2.6.37-rc2.tar -C dir$i & done ; echo waiting now ; time wait; echo syncing now ; time sync

Thanks for all explanations
