
Re: XFS performance issues: O_DIRECT and Linux 2.6.6+

To: Nathan Scott <nathans@xxxxxxx>
Subject: Re: XFS performance issues: O_DIRECT and Linux 2.6.6+
From: James Foris <james.foris@xxxxxxxxxx>
Date: Tue, 21 Sep 2004 14:10:40 -0500
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <20040915083307.GA14251@frodo>
References: <411A8410.2030000@xxxxxxxxxx> <20040910041106.GA14336@frodo> <4144B19A.2020407@xxxxxxxxxx> <4145D141.1040907@xxxxxxxxxx> <20040914095914.A4118499@xxxxxxxxxxxxxxxxxxxxxxxx> <41472212.1090605@xxxxxxxxxx> <20040915015002.GA12795@frodo> <20040915083307.GA14251@frodo>
Reply-to: james.foris@xxxxxxxxxx
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7) Gecko/20040624
Nathan Scott wrote:
> Hi there,
>
> Could you try the patch below James?  It should apply
> cleanly to the 2.6.x-xfs cvs tree on oss.sgi.com, or to
> Linus' current -bk tree (but that may need a little bit
> of tweaking, not sure off the top of my head).
>
> Let me know if the numbers are good/bad/indifferent (or
> if you see any hangs etc - I really need to stare at the
> locking in here for a whole lot longer).

Sorry it took so long to get back to you, but the numbers
with the high-performance RAID look very good:

WITH O_DIRECT:
MB,num writes,MB/write,dt total,dt write,dt sync,MB/s,pct disk full,pct memory full
1024.000, 32,32.000,2.499,2.499,0.000,409.841,1.95%,99.55%
1024.000, 32,32.000,2.478,2.478,0.000,413.225,2.20%,99.56%
1024.000, 32,32.000,2.513,2.513,0.000,407.523,2.44%,99.56%
1024.000, 32,32.000,2.486,2.486,0.000,411.846,2.68%,99.56%
1024.000, 32,32.000,2.471,2.471,0.000,414.381,2.93%,99.56%
1024.000, 32,32.000,2.469,2.469,0.000,414.772,3.17%,99.56%
1024.000, 32,32.000,2.486,2.486,0.000,411.831,3.41%,99.56%
1024.000, 32,32.000,2.488,2.488,0.000,411.567,3.66%,99.56%
1024.000, 32,32.000,2.509,2.509,0.000,408.145,3.90%,99.56%
1024.000, 32,32.000,2.482,2.482,0.000,412.499,4.15%,99.56%
10 tests overall average write 2.488 sec; 411.550 MB/s
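
(The MB/s column is presumably total data over total write time, e.g.
1024.000 MB / 2.488 s ~= 411.6 MB/s for the average run above.)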

WITHOUT O_DIRECT:
MB,num writes,MB/write,dt total,dt write,dt sync,MB/s,pct disk full,pct memory full
1024.000, 32,32.000,3.569,3.273,0.296,286.913,4.39%,99.55%
1024.000, 32,32.000,3.739,3.412,0.327,273.858,4.63%,99.41%
1024.000, 32,32.000,3.670,3.421,0.248,279.033,4.88%,99.52%
1024.000, 32,32.000,3.721,3.430,0.291,275.197,5.12%,99.44%
1024.000, 32,32.000,3.656,3.413,0.243,280.108,5.37%,99.41%
1024.000, 32,32.000,3.791,3.457,0.334,270.124,5.61%,99.45%
1024.000, 32,32.000,3.728,3.457,0.271,274.707,5.85%,99.51%
1024.000, 32,32.000,3.720,3.449,0.271,275.279,6.10%,99.40%
1024.000, 32,32.000,3.763,3.462,0.301,272.112,6.34%,99.47%
1024.000, 32,32.000,3.747,3.483,0.264,273.251,6.58%,99.36%
10 tests overall average write 3.710 sec; 275.984 MB/s

To recap the above: sustained throughput jumps from 276 MB/s (buffered)
to 411 MB/s (O_DIRECT), which is what we had originally hoped for.
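
For anyone wanting to reproduce this, note that the O_DIRECT path needs
suitably aligned buffers. A minimal sketch of such a write follows; the
4 KB alignment, the 32 MB write size, and the "testfile" name are
illustrative assumptions, not the actual test program:

/* Minimal O_DIRECT write sketch (illustrative only). O_DIRECT
 * requires the user buffer, file offset, and I/O size to be
 * aligned; 4 KB satisfies typical device block sizes. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const size_t iosize = 32UL << 20;    /* 32 MB per write, as in the runs above */
        void *buf;
        int fd;

        /* posix_memalign() returns a buffer aligned for O_DIRECT */
        if (posix_memalign(&buf, 4096, iosize))
                return 1;
        memset(buf, 0, iosize);

        fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (write(fd, buf, iosize) != (ssize_t)iosize)
                perror("write");
        close(fd);
        free(buf);
        return 0;
}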

Any idea how long before the patch makes its way into Linus' tree?

Thanks again,

Jim Foris


> thanks!


