
Re: Slow read performance

To: Nathan Scott <nathans@xxxxxxx>
Subject: Re: Slow read performance
From: Jason Howard <jason@xxxxxxxxxxxxx>
Date: Fri, 20 Aug 2004 15:58:52 -0700
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <20040821072805.C3393262@wobbly.melbourne.sgi.com>
Organization: SpectSoft, LLC
References: <200408201342.53409.jason@spectsoft.com> <20040821072805.C3393262@wobbly.melbourne.sgi.com>
Reply-to: jason@xxxxxxxxxxxxx
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: KMail/1.6.2
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Friday 20 August 2004 14:28, you wrote:
> On Fri, Aug 20, 2004 at 01:42:53PM -0700, Jason Howard wrote:
> > Hello All,
>
> hi there,
>
> > I am wondering if anyone can offer any insight into a problem I have been
> > seeing with XFS included in the 2.4.27 kernel.  I am seeing a huge
> > difference between a raw device (/dev/md0) read and a filesystem read
> > when reading from an array that is capable of >500 MBytes/sec.  The reads
> > and writes are sequential, both on the raw disk device and XFS filesystem
> > (using sequential files).
>
> Buffered or direct reads?  (could you try both?)

Buffered right now.  I did try direct reads a while back and I don't recall 
them making any difference.  I will give it another shot later this evening.
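For what it's worth, this is roughly how I plan to compare the two tonight 
(a sketch only -- /dev/md0 and the transfer sizes are just what we use here, 
and iflag=direct needs a reasonably recent GNU dd):

```shell
# Buffered sequential read through the page cache:
dd if=/dev/md0 of=/dev/null bs=1M count=2048

# Same read with O_DIRECT, bypassing the page cache and readahead:
dd if=/dev/md0 of=/dev/null bs=1M count=2048 iflag=direct
```

If the direct numbers are close to the raw-device speed while the buffered 
ones aren't, that would point at readahead rather than XFS itself.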

> Is this MD RAID5?  (if so, could you send xfs_info output for this
> filesystem as well?)

Both arrays (the SCSI and the FC) are RAID 0 using the Linux MD driver.

The SCSI setup uses a two-channel RAID 3 array (see 
http://www.hugesystems.com/Products/MVault-U320-RX.cfm).  Each side has 5 
disks using hardware RAID 3.  The two channels are then RAID 0ed together 
using the MD driver.

The fibre channel setup uses 16 disks and a controller with two 2 Gb 
interfaces.  The first six drives are accessed via the first FC interface, and 
the second six are accessed via the second interface.  All 16 drives are RAID 
0ed together via the Linux MD driver.

> If it is a RAID5 device, make sure your filesystem sector and
> block sizes are the same (both are mkfs.xfs options).

They are.  On the SCSI setup we are using 2K SCSI transfers, so the data and 
log sector sizes are set to 2K (or 4K).

> Otherwise, not sure... could be a readahead oddity if this is
> buffered IO.  As another data point, what do the ext2 numbers
> look like?  (this will point to an XFS-specific problem, or a
> more generic - eg. readahead - type of problem).

I tested Ext3 and JFS on the array we have here (the SCSI) and, while I'm not 
seeing write speeds as good as XFS's, the read speeds are much better than the 
67 MB/s I was getting with XFS.  JFS gives me about 150 MB/s on both reads 
and writes.  Ext3 gives me slightly less.
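On the readahead theory, I can also check and bump the readahead window on the 
MD device before the next round of tests (sketch only -- the value is in 
512-byte sectors, and 8192 is just an illustrative number, not a 
recommendation):

```shell
# Show the current readahead setting for the array:
blockdev --getra /dev/md0

# Raise it to 8192 sectors (4 MB) and re-run the buffered read test:
blockdev --setra 8192 /dev/md0
```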

Thanks much!
Jason

- -- 
 Jason Howard
 
 Professional:  
   SpectSoft, LLC
   593 Hi-Tech Pkwy, Suite B
   Oakdale, CA 95361, USA
   http://www.spectsoft.com
   jason a-t spectsoft.com
   Phone: +1.209.847.7812
   Fax: +1.209.847.7859

 Personal:
   http://www.psinux.org
   jason a-t psinux.org
   Cell: +1.209.968.1289
   Text Message: jasonsphone a-t psinux.org

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iD8DBQFBJoIsMghdcvnd+m4RAhZHAJ9TpyeT6Q2V1s9jGeHI976aAEaFpwCgjxPC
aHoxte1lYYbi+ZYe7YlYgBQ=
=+DsN
-----END PGP SIGNATURE-----

