
Re: Read performance issues with 2.6.0test5?

To: Frank Hellmann <frank@xxxxxxxxxxxxx>
Subject: Re: Read performance issues with 2.6.0test5?
From: Austin Gonyou <austin@xxxxxxxxxxxxxxx>
Date: Mon, 22 Sep 2003 11:37:20 -0500
Cc: XFS List <linux-xfs@xxxxxxxxxxx>
In-reply-to: <3F6EFF6E.9040005@xxxxxxxxxxxxx>
Organization: Coremetrics, Inc.
References: <3F6EFF6E.9040005@xxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
On Mon, 2003-09-22 at 08:55, Frank Hellmann wrote:
> Hi!
> 
> I am currently trying out kernel 2.6.0test5.1.36 (the one from the
> Red Hat site). After patching in the latest QLogic Fibre Channel
> drivers (8.0.0.4) and running performance tests I was wondering about
> the results. The machine consists of two dual-channel PCI-X QLogic FC
> controllers and 4 Infortrend disk arrays with ~2TB each. The arrays are

I actually have an Infortrend, so this is interesting to me. I have
qla2300s and qla2200s on four different machines. All devices are
attached to SilkWorm 2800 switches, and I have two large volumes
presented from the Infortrend, which gives me hardware-level, six-way
striping. It's a RAID0 backup volume, using direct I/O with no caching.
We can read and write 2GB files with an 8K block size in 16 seconds in
both directions. It's the same test you ran below, just on a different
kernel, so I agree it isn't directly comparable at that level, but read
below for a couple more suggestions.
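
For reference, a timed dd pair along these lines reproduces that
2GB / 8K-block test (the mount point and file name are hypothetical,
and this assumes a GNU dd, which accepts size suffixes on count):

    # Write a 2GB file in 8K blocks, then read it back.
    time -p dd if=/dev/zero of=/backup/testfile bs=8k count=256k
    time -p dd if=/backup/testfile of=/dev/null bs=8k count=256k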

> hard-partitioned (inside the arrays) into 512GB chunks, and I am
> currently striping them together via /dev/md0 into a 2TB volume.
> 
> Kernel 2.6.0test5.1.36custom:
> 
> Writing (time -p dd if=/dev/zero of=/xine1/zeros bs=1M count=4K)
> Real: 11.00
> User: 0.02
> Sys: 10.60

Your dd block size is 1M, but what frame size is the controller
configured to use on the FC side?
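
If your driver build still exposes the old proc interface (an
assumption on my part; the 8.x qla2xxx driver has been moving this
around, so the path may differ on your tree), you can dump the adapter
and firmware settings with something like:

    # Host number 1 is hypothetical; look under /proc/scsi/ for the
    # actual node your driver registered.
    cat /proc/scsi/qla2xxx/1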

> Reading (time -p dd if=/xine1/zeros of=/dev/null bs=1M count=4K)
> Real: 45.15
> User: 0.01
> Sys: 7.01

The same question about the FC frame size applies here. Be aware that
you'll also have to rule out md as a bottleneck, i.e. format a 2TB
volume presented straight from the IFT and run the same test with no
other layers in between. Also, which XFS mount options are you using?
biosize, logbufs, noatime?
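
For example (values hypothetical; check the mount options documented
for the XFS in your tree), something like:

    # logbufs=8:  maximum number of in-core log buffers
    # biosize=16: preferred buffered I/O size, expressed as log2
    #             (16 = 64K)
    # noatime:    skip the inode access-time update on every read
    mount -t xfs -o logbufs=8,biosize=16,noatime /dev/md0 /xine1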

> That is a bit of a weird result, isn't it? On the hand-patched
> 2.4.20-18.9 kernel with XFS 1.3.0pre4 I get much better read
> performance and about the same write performance. I checked whether
> there are any issues with the internal log and tried an external one,
> but that didn't change much.

I agree that these results are odd. Something is amiss here, but there
are many layers to eliminate before you can blame any one of them. You
could always format that md volume you made as ext3 or ReiserFS and
perform the same test, as sketched below.
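
A minimal comparison run might look like this (device and mount point
taken from your description above; note that mkfs destroys the data on
the volume):

    umount /xine1
    mkfs.ext3 /dev/md0                  # or mkreiserfs /dev/md0
    mount -t ext3 /dev/md0 /xine1
    time -p dd if=/dev/zero of=/xine1/zeros bs=1M count=4K   # write
    time -p dd if=/xine1/zeros of=/dev/null bs=1M count=4K   # read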

> I am not too proficient in using the kernel profiling tools to see
> where the performance goes, so maybe someone could give me a hint
> what to look for.
> 
>         Cheers,
>                         Frank...

-- 
Austin Gonyou <austin@xxxxxxxxxxxxxxx>
Coremetrics, Inc.

