I am constructing a large, high-performance U320 raid0 array with XFS,
using the 2.4.20-rc1-xfs kernel and Linux software raid. I am pleased with
the performance, but once I exceed 14 disks in the single raid0 array,
performance becomes erratic. With 14 or fewer disks, the extents are laid
out nicely in incrementing order, very close to each other (according to
xfs_bmap), and I get very large MB/s numbers using 4 U320 SCSI channels
and 73GB disks. With 15 or more disks, however, I see large, erratic gaps
in the extents, which seriously hurts read/write performance. I never
exceed 5 drives per channel, and the problem persists no matter what SCSI
configuration I use; I have tried various numbers of channels and disks
per channel, but it always appears once I exceed 14 disks. Currently the
read performance with 15 disks is about half the read performance with 14
disks in the array.
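For reference, this is roughly how I am checking the extent layout and
read speed (the device name, mount point, and file sizes below are
placeholders, not my exact setup):

```shell
# Make the filesystem on the md array and mount it
# (array itself built beforehand with raidtools / /etc/raidtab).
mkfs.xfs -f /dev/md0
mount /dev/md0 /mnt/test

# Write a large test file, then inspect its extent layout.
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096
xfs_bmap -v /mnt/test/bigfile   # look for gaps between extent offsets

# Remount to drop the file from cache, then time a sequential read.
umount /mnt/test && mount /dev/md0 /mnt/test
time dd if=/mnt/test/bigfile of=/dev/null bs=1M
```

With 14 disks the xfs_bmap output shows consecutive, closely packed
extents; with 15+ the offsets jump around.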
Other filesystems I have tested (reiserfs and ext2/3) do not seem to
suffer from this problem, but they also don't produce the awesome speed
that XFS does.
I plan on experimenting with the latest 2.4 and 2.5 versions of the XFS
kernel as soon as I can get a good copy from CVS.
Any help is appreciated. Thanks.