
direct io performance problem

To: "xfs mailing list" <linux-xfs@xxxxxxxxxxx>
Subject: direct io performance problem
From: "Gabor Forgacs" <gabor@xxxxxxxxxxxxxx>
Date: Thu, 6 Mar 2003 15:21:04 +0100
Organization: Colorfront
Reply-to: "Gabor Forgacs" <gabor@xxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
Hi all,

We are testing disk speed on Linux; here is how it is going:

Test System

kernel:
2.4.20 and 2.4.18

hardware:
IBM Z-Pro, QLogic 2342 (driver 6.4), 7-7 disks

md parameters:
chunk_size 32

XFS:
I used the latest CVS version of XFS (2.0)

Sequential file tests were done with 12 MB files, without any previously
cached data.
All files were read with a single request (that is what we need).

- XFS with direct I/O and one reading thread can scale up to
220-230 MB/sec with relatively low CPU load
- XFS with direct I/O and more than one thread is only capable of
~100 MB/sec; there is probably some locking issue in the filesystem

- XFS without direct I/O can produce ~260-280 MB/sec with pretty high CPU
load; the CPU load is probably related to the caching

- ext3 is about the same as XFS without direct I/O

So right now we can do fast disk I/O, but we cannot do anything else at the
same time because that ruins the disk performance.
What is your experience? Did you manage to get higher disk speeds on
Linux? Any idea what else we could try?

The same system under Windows 2000 can sustain 300-310 MB/sec, so I suspect
there is some bottleneck in the direct I/O implementation on Linux.
If the cached version can do 260-280 MB/sec despite the extra memcpy, the
driver should be fast enough for the task.
Is there any recommendation for getting the same I/O rate as on Windows?

Thank you,
Gabor Forgacs


