
Re: U320 Large Array Performance

To: Rick Smith <rgsmith72@xxxxxxxxxxx>
Subject: Re: U320 Large Array Performance
From: Steve Lord <lord@xxxxxxx>
Date: 04 Feb 2003 17:45:18 -0600
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <F69YSAwpxevpPsdZKWp00024c9d@xxxxxxxxxxx>
References: <F69YSAwpxevpPsdZKWp00024c9d@xxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
On Tue, 2003-02-04 at 16:47, Rick Smith wrote:
> Steve,
>       Here is the output from xfs_info, please excuse the formatting. I am 
> using 
> a chunk size of 4K in the raid. Larger sizes seem to degrade performance. 
> Thanks for your help.

I was going to have a chat with someone about this, but they are gone
for the day, so stripe suggestions will have to wait a while.

Do you have any way of measuring the I/O going to each individual
drive? It is possible to make a filesystem where the allocation group
headers all land on the same device, and that cripples us. To be
honest I am not sure which version of the mkfs code is in Linux right
now. If you can monitor per-drive I/O you would see this happening.
Messing with -d agsize=xxx can be used to fix it up.
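
As a rough sketch of what I mean by monitoring (the device names below
are only placeholders, and this assumes the sysstat iostat is on that
box; adjust for whatever tools you actually have):

    # Sample per-device activity every 5 seconds while the test runs;
    # sdb..sdp stand in for whatever the component drives of the md
    # array are really called. Drop -x if your iostat lacks it.
    iostat -x 5 | egrep '^(Device|sd[b-p])'

    # On a 2.4 kernel the raw per-partition I/O counters can also be
    # read directly, if block statistics are enabled in the kernel:
    cat /proc/partitions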

Try adding -b size=4096 -d agsize=1048559b to the 15 disk case. This
makes the allocation group size one block less than a multiple of your
stripe width, which should cause the allocation group headers to
round-robin through the drives. The math is to pick an agsize such
that agsize+1 is a multiple of the number of drives you have, while
staying a bit under 4 Gbytes. This is a bit of a shot in the dark, but
it may help.
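
Spelling the arithmetic out, with /dev/md0 standing in for your actual
raid0 device (these are my numbers, so double check them):

    # swidth = 15 blocks of 4096 bytes, sunit = 1 block, so we want
    # agsize + 1 to be a multiple of 15. The largest such multiple at
    # or below the 1048576-block (4 GiB) default is:
    #     15 * 69904 = 1048560
    # so agsize = 1048560 - 1 = 1048559 blocks, about 68K short of
    # 4 GiB. Keep your external log and other options as before.
    mkfs.xfs -b size=4096 -d agsize=1048559b /dev/md0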

Also, you have -d unwritten=1; this will not be doing you any good,
and can in fact cause problems with some forms of mmapped I/O.
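
Since -d unwritten=1 was evidently passed at mkfs time, the simplest
fix is to drop it or set it to 0 on the next mkfs (again, /dev/md0 is
only a placeholder):

    # Disable unwritten extent tracking; this can be combined with the
    # agsize suggestion above as comma-separated -d suboptions.
    mkfs.xfs -b size=4096 -d agsize=1048559b,unwritten=0 /dev/md0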

More tomorrow.

Steve

> 
> With 14 disks in the array:
> meta-data=/test              isize=256    agcount=240, agsize=1048576 blks
> data     =                   bsize=4096   blocks=251392736, imaxpct=25
>          =                   sunit=1      swidth=14 blks, unwritten=1
> naming   =version 2          bsize=4096
> log      =external           bsize=4096   blocks=32768 version=1
>          =                   sunit=0 blks
> realtime =none               extsz=57344  blocks=0, rtextents=0
> 
> With 15 disks in the array
> meta-data=/test              isize=256    agcount=257, agsize=1048576 blks
> data     =                   bsize=4096   blocks=269349360, imaxpct=25
>          =                   sunit=1      swidth=15 blks, unwritten=1
> naming   =version 2          bsize=4096
> log      =external           bsize=4096   blocks=32768 version=1
>          =                   sunit=0 blks
> realtime =none               extsz=61440  blocks=0, rtextents=0
> 
> >From: Steve Lord <lord@xxxxxxx>
> >To: Rick Smith <rgsmith72@xxxxxxxxxxx>
> >CC: linux-xfs@xxxxxxxxxxx
> >Subject: Re: U320 Large Array Performance
> >Date: 04 Feb 2003 16:16:05 -0600
> >
> >On Tue, 2003-02-04 at 16:09, Rick Smith wrote:
> > >     I am constructing a large, high performance U320 raid0 array
> > > with XFS using the 2.4.20-rc1-xfs kernel and linux software raid.
> > > I am pleased with the performance, but it seems that when I
> > > exceed 14 disks total in the single raid0 array, performance
> > > becomes erratic. With 14 disks and under, the extents are laid
> > > out nicely in incrementing order very close to each other
> > > (according to xfs_bmap) and I can get very large MB/s numbers
> > > using 4 U320 SCSI channels and 73GB disks. However, with 15+
> > > disks, I am seeing large, erratic gaps in the extents, which is
> > > seriously affecting read/write performance. I don't exceed 5
> > > drives per channel, and the problem seems to exist no matter what
> > > SCSI configuration I use. I have tried various numbers of
> > > channels and disks per channel, but the problem remains when I
> > > exceed 14 disks. Currently the read performance for 15 disks is
> > > about half the read performance for 14 disks in the array.
> > >     Other filesystems tested (reiserfs and ext2/3) do not seem to
> > > suffer from this problem, but they also don't produce the awesome
> > > speed that the XFS filesystem does.
> > >     I plan on experimenting with the latest 2.4 and 2.5 versions
> > > of the XFS kernel as soon as I can get a good copy from CVS.
> > >     Any help is appreciated. Thanks.
> >
> >Sounds like you need to play with mkfs options on XFS. Can you send
> >the output of xfs_info /mnt where /mnt is the mounted filesystem.
> >
> >Steve
> >
> >
> >--
> >
> >Steve Lord                                      voice: +1-651-683-3511
> >Principal Engineer, Filesystem Software         email: lord@xxxxxxx
> 
> 
-- 

Steve Lord                                      voice: +1-651-683-3511
Principal Engineer, Filesystem Software         email: lord@xxxxxxx

