To: rkj@xxxxxxxxxxxx
Subject: Re: Looking for Linux XFS file system performance tuning tips for LSI9271-8i + 8 SSD's RAID0
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Mon, 4 Feb 2013 23:52:34 +1100
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <courier.510ECA60.00003A99@xxxxxxxxxxxx>
References: <courier.510ECA60.00003A99@xxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Sun, Feb 03, 2013 at 01:36:48PM -0700, rkj@xxxxxxxxxxxx wrote:
> 
> I am working with hardware RAID0 using LSI 9271-8i + 8 SSD's.  I am
> using CentOS 6.3 on a Supermicro X9SAE-V motherboard with Intel Xeon
> E3-1275V2 CPU and 32GB 1600 MHz ECC RAM.  My application is fast
> sensor data store and forward with UDP based file transfer using
> multiple 10GbE interfaces.  So I do not have any concurrent loading,
> I am mainly interested in optimizing sequential read/write
> performance.
>
> Raw performance as measured by Gnome Disk Utility is around 4GB/s
> sustained read/write.

I don't know what that does - probably lots of concurrent IO to drive
deep queue depths to get the absolute maximum possible from the
device....
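
If you want an apples-to-apples number, fio can generate that kind
of load explicitly. A sketch - the file name, size and queue depths
here are made up, tune them for your rig:

	fio --name=seqread --filename=/mnt/test/file --rw=read \
		--bs=1M --size=16g --direct=1 --ioengine=libaio \
		--iodepth=32 --numjobs=4 --group_reporting

That keeps 128 IOs in flight across 4 threads, which is much closer
to what a raw device benchmark does than a single threaded read()
loop.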

> With XFS buffered IO, my sequential writes max
> out at about 2.5 GB/s.

CPU bound on single threaded IO, I'd guess.

> With Direct IO, the sequential writes are
> around 3.5 GB/s but I noticed a drop-off in sequential reads for
> smaller record sizes.

Almost certainly IO latency bound on single threaded IO.

> I am trying to get the XFS sequential
> read/writes as close to 4 GB/s as possible.

Time to go look up how to use async IO or multithreaded direct
IO.
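
For example, here's a minimal sketch of async direct IO using Linux
libaio (build with -laio; the file name, queue depth and IO size are
made up - tune them for your setup):

#define _GNU_SOURCE			/* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NR_IOS	8			/* IOs kept in flight */
#define IOSIZE	(1024 * 1024)		/* 1MB per IO */

int main(void)
{
	io_context_t ctx = 0;
	struct iocb iocbs[NR_IOS], *iocbps[NR_IOS];
	struct io_event events[NR_IOS];
	long long off = 0;
	int fd, i;

	fd = open("/mnt/test/file", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0 || io_setup(NR_IOS, &ctx)) {
		perror("setup");
		return 1;
	}

	/* O_DIRECT needs sector aligned buffers and offsets */
	for (i = 0; i < NR_IOS; i++) {
		void *buf;

		if (posix_memalign(&buf, 4096, IOSIZE))
			return 1;
		memset(buf, 0, IOSIZE);
		io_prep_pwrite(&iocbs[i], fd, buf, IOSIZE, off);
		iocbps[i] = &iocbs[i];
		off += IOSIZE;
	}

	/* submit all the writes at once, then reap the completions */
	if (io_submit(ctx, NR_IOS, iocbps) != NR_IOS) {
		perror("io_submit");
		return 1;
	}
	if (io_getevents(ctx, NR_IOS, NR_IOS, events, NULL) < NR_IOS) {
		perror("io_getevents");
		return 1;
	}

	/* a real data mover resubmits each iocb as it completes */
	io_destroy(ctx);
	close(fd);
	return 0;
}

Keeping multiple IOs in flight like this is what hides the per-IO
completion latency that a single threaded write() loop is fully
exposed to.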

FWIW, the best benchmark is your application - none of what you've
talked about even comes close to modelling the data flow a
network-disk-network store-and-forward system needs, and at data
rates of 4GB/s you are going to need to benchmark with the network
devices flowing data at the same time you do disk IO....

> I have documented all of the various mkfs.xfs options I have tried,
> fstab mount options, iozone results, etc. in this forum thread:

Configuration changes won't make any difference to data IO latency
or CPU usage. IOWs, SSDs don't magically solve the problem of having
to optimise the way applications/benchmarks do IO, so no amount of
tweaking the filesystem will get you to your goal if the application
is deficient...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
