
XFS Raid5 in Raid0 poor performance

To: linux-xfs@xxxxxxxxxxx
Subject: XFS Raid5 in Raid0 poor performance
From: Patrick Cole <z@xxxxxxxxxx>
Date: Fri, 6 Jun 2003 14:58:26 +1000
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4i
The problem:

I have a file server which backs up to tape nightly (which is not of
great importance).  It was initially running Ext3 and could write to
tape at mostly native speed (~15MB/s).  I decided I wanted to change
it over to XFS to get online defragmenting and better performance but
the latter has not been the case:

Version 1.02c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
icufs2           4G 11105  98 24850  15 15788  12 10841  98 49210  32 581.4   6
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   431  15 +++++ +++   452  12   518  15 +++++ +++   230   6
icufs2,4G,11105,98,24850,15,15788,12,10841,98,49210,32,581.4,6,16,431,15,+++++,+++,452,12,518,15,+++++,+++,230,6

These are the results of a bonnie test on the machine, and they show the
poor performance the filesystem is giving in this application.  We suspect
the reason for this is the raid setup we are running.  Here is how it is
set up:

Two controllers (3ware Escalade), 8-way each

Raid0 (256KB chunk size) {
        Raid5 (32KB chunk size) {
                Drive 1
                Drive 2
                Drive 3
                Drive 4
                Drive 5
                Drive 6
                Drive 7 (Parity)
                Drive 8 (Hot spare)
        }
        Raid5 (32KB chunk size) {
                Drive 1
                Drive 2
                Drive 3
                Drive 4
                Drive 5
                Drive 6
                Drive 7 (Parity)
                Drive 8 (Hot spare)
        }
}

So it's something like raid0(raid5(1-8),raid5(1-8)), with each drive being about
120GB, giving a total of around 1.3 TB.
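One thing that can be checked from the numbers above is whether the two layers
of striping line up.  The sketch below is only an illustration built from the
sizes listed (32KB RAID5 chunks, 6 data drives per set after parity and the
hot spare, 256KB RAID0 chunks); if the outer chunk is not a multiple of the
inner data stripe, the RAID0 layer would split writes across RAID5 parity
stripe boundaries, forcing read-modify-write cycles:

```python
# Sketch, using the geometry listed above (assumed, not measured):
raid5_chunk_kb = 32
raid5_data_disks = 6          # 8 drives minus dedicated parity and hot spare
raid5_stripe_kb = raid5_chunk_kb * raid5_data_disks   # full data stripe

raid0_chunk_kb = 256

# A RAID0 chunk that is not a whole number of RAID5 data stripes will
# straddle parity stripes on the inner layer.
aligned = raid0_chunk_kb % raid5_stripe_kb == 0

print(f"RAID5 data stripe: {raid5_stripe_kb}KB")              # 192KB
print(f"RAID0 chunk {raid0_chunk_kb}KB aligned to it: {aligned}")  # False
```

With these numbers the inner data stripe comes out to 192KB, which does not
divide the 256KB outer chunk, so misalignment is at least a plausible
contributor to the poor write figures.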
 
Here are the bonnie results from an IDENTICAL machine running Ext3:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
cbisfs2          4G 12224  99 86949  72 46664  54 13741  99 87315  61 582.4   4
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1390  95 +++++ +++ +++++ +++  1452  99 +++++ +++  2981  98
cbisfs2,4G,12224,99,86949,72,46664,54,13741,99,87315,61,582.4,4,16,1390,95,+++++,+++,+++++,+++,1452,99,+++++,+++,2981,98

This is more the kind of performance you would expect out of this kind of setup.
As you can see the block sequential output is almost four times better on Ext3
with this setup, while the block sequential input is almost twice as good.

The other thing worth noting is that the create and delete performance is
reduced to next to nothing!  I get 3x better create performance with XFS on my
single-drive ATA100 workstation, with slower CPUs.

Some input/ideas/fixes would be appreciated.

Btw, the filesystem was created using xfsprogs 2.0.6.  After talking with
sandeen, he suggested that the newer utils had some changes relating to
setting the sunit and stripe width parameters of the filesystem.  Is this
the case?
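For what it's worth, mkfs.xfs takes its stripe geometry as sunit/swidth values
in 512-byte sectors.  A rough sketch of what those values might look like here,
assuming the filesystem should align to the outer RAID0 layer (whether that is
the right layer to align to for a nested array is exactly the open question):

```python
SECTOR_BYTES = 512

def kb_to_sectors(kb):
    """Convert a chunk size in KB to 512-byte sectors, as mkfs.xfs expects."""
    return kb * 1024 // SECTOR_BYTES

raid0_chunk_kb = 256   # outer stripe unit, from the layout above
raid0_members = 2      # two RAID5 sets striped together

sunit = kb_to_sectors(raid0_chunk_kb)   # sectors per stripe unit
swidth = sunit * raid0_members          # sectors per full stripe

# Illustrative invocation only -- device path is a placeholder:
print(f"mkfs.xfs -d sunit={sunit},swidth={swidth} /dev/sdX")
```

This is just arithmetic on the numbers quoted above, not a recommendation;
presumably the newer xfsprogs would derive or accept something along these
lines.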

Cheers

Patrick

-- 
Patrick Cole <Patrick.Cole@xxxxxxxxxx>
Programmer, the John Curtin School of Medical Research, ANU 
Office 02 6125 6794  Mobile 0438 763337
PGP 1024R/60D74C7D C8E0BC7969BE7899AA0FEB16F84BFE5A   

