
Re: U320 Large Array Performance

To: Rick Smith <rgsmith72@xxxxxxxxxxx>
Subject: Re: U320 Large Array Performance
From: Steve Lord <lord@xxxxxxx>
Date: 04 Feb 2003 16:16:05 -0600
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <F50ltFlmmlQ080Gxw8Z00024707@xxxxxxxxxxx>
References: <F50ltFlmmlQ080Gxw8Z00024707@xxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
On Tue, 2003-02-04 at 16:09, Rick Smith wrote:
>     I am constructing a large, high-performance U320 raid0 array with XFS 
> using the 2.4.20-rc1-xfs kernel and linux software raid. I am pleased with 
> the performance, but it seems that when I exceed 14 disks total in the 
> single raid0 array, performance becomes erratic. With 14 disks and under, 
> the extents are laid out nicely in incrementing order very close to each 
> other (according to xfs_bmap) and I can get very large MB/s numbers using 4 
> U320 SCSI channels and 73GB disks. However, with 15+ disks, I am seeing 
> large, erratic gaps in the extents, which is seriously affecting read/write 
> performance. I don't exceed 5 drives per channel, and the problem seems to 
> exist no matter what SCSI configuration I use. I have tried various 
> numbers of channels and disks per channel, but the problem remains when I 
> exceed 14 disks. Currently the read performance for 15 disks is about half 
> the read performance for 14 disks in the array.
>     Other filesystems tested (reiserfs and ext2/3) do not seem to suffer 
> from this problem, but they also don't produce the awesome speed that the 
> XFS filesystem does.
>     I plan on experimenting with the latest 2.4 and 2.5 versions of the XFS 
> kernel as soon as I can get a good copy from CVS.
>     Any help is appreciated. Thanks.

Sounds like you need to play with mkfs options on XFS. Can you send
the output of xfs_info /mnt, where /mnt is the mounted filesystem?
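
Since this is a software raid0 stripe, the stripe geometry can also be
handed to mkfs.xfs explicitly rather than left to the defaults. A minimal
sketch, assuming a 64k md chunk size and 15 data disks (the chunk size,
disk count, and device name are placeholders; sunit and swidth are in
512-byte sectors):

    mkfs.xfs -d sunit=128,swidth=1920 /dev/md0

Here sunit is the chunk size in sectors (64k / 512 = 128) and swidth is
sunit times the number of disks in the stripe (128 * 15 = 1920).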


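For comparing layouts before and after remaking the filesystem, the
per-file extent map you were reading can be dumped verbosely; a minimal
sketch, assuming a test file at /mnt/testfile (the path is a placeholder):

    xfs_bmap -v /mnt/testfile

The -v output includes the allocation group and offset for each extent,
which makes the gaps you describe easier to spot.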

Steve Lord                                      voice: +1-651-683-3511
Principal Engineer, Filesystem Software         email: lord@xxxxxxx
