On Tue, Jan 09, 2007 at 12:22:12PM +1100, David Chinner wrote:
> On Mon, Jan 08, 2007 at 11:13:43AM -0600, Mr. Berkley Shands wrote:
> > My testbench is a 4 core Opteron (dual 275's) into
> > two LSI8408E SAS controllers, into 16 Seagate 7200.10 320GB satas.
> > Red Hat ES 4.4 (CentOS 4.4). A slightly newer parted is needed
> > than the contemporary-of-Moses version shipped with the O/S.
> >
> > I have a standard burn-in script that takes the 4 4-drive RAID0s,
> > puts a GPT label on them, and aligns the partitions to stripe
> > boundaries. It then writes 8GB files concurrently onto all
> > 4 raid devices.
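
For anyone wanting to reproduce that sort of burn-in, a minimal
sketch (the device names, mount points, and 2048-sector alignment
below are assumptions, not Berkley's actual script):

#!/bin/sh
# Label each array, create a stripe-aligned partition, mkfs, mount.
i=0
for DEV in /dev/md0 /dev/md1 /dev/md2 /dev/md3; do
    parted -s $DEV mklabel gpt
    parted -s $DEV mkpart primary 2048s 100%  # start on a stripe boundary
    mkfs.xfs -f ${DEV}p1                      # partition node name is an assumption
    mkdir -p /mnt/r$i
    mount ${DEV}p1 /mnt/r$i
    i=$((i+1))
done
# write 8GB files to all four arrays concurrently
for i in 0 1 2 3; do
    dd if=/dev/zero of=/mnt/r$i/burnin.dat bs=1M count=8192 &
done
wait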
I just ran up a similar test - a single large file per device on a
4-core Xeon (Woodcrest) with 16GB RAM, a single PCI-X SAS HBA and
12x 10krpm 300GB SAS disks split into 3x 4-disk dm RAID0 stripes
on 2.6.18 and 2.6.20-rc3.
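For reference, each stripe was built along these lines (the device
names and 256k chunk size here are illustrative, not my exact
config):

# SZ=$(blockdev --getsz /dev/sdb)   # sectors per disk
# SZ=$((SZ / 512 * 512))            # round down to a chunk multiple
# echo "0 $((SZ * 4)) striped 4 512 /dev/sdb 0 /dev/sdc 0 /dev/sdd 0 /dev/sde 0" | dmsetup create stripe0
# mkfs.xfs /dev/mapper/stripe0
# mount /dev/mapper/stripe0 /mnt/stripe0

Repeat for the other two stripes, then write a single large file to
each concurrently.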
I see the same thing - 2.6.20-rc3 is more erratic and quite a
bit slower than 2.6.18 when going through XFS.
I suggest trying this on 2.6.20-rc3:
# echo 10 > /proc/sys/vm/dirty_ratio
That restored most of the lost performance and consistency
in my testing....
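
FWIW, lowering dirty_ratio caps how much dirty pagecache can
accumulate before writers get throttled - with the old default of
40%, a 16GB box can queue up several GB of dirty data and then issue
it in big writeback bursts. If the lower value helps, the same
tunable can be set via sysctl so it survives a reboot:

# sysctl -w vm.dirty_ratio=10

or by putting "vm.dirty_ratio = 10" in /etc/sysctl.conf.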
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group