
Re: Performance Characteristics of All Linux RAIDs (mdadm/bonnie++)

To: Justin Piszcz <jpiszcz@xxxxxxxxxxxxxxx>
Subject: Re: Performance Characteristics of All Linux RAIDs (mdadm/bonnie++)
From: Chris Snook <csnook@xxxxxxxxxx>
Date: Wed, 28 May 2008 11:40:24 -0400
Cc: linux-kernel@xxxxxxxxxxxxxxx, linux-raid@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx
In-reply-to: <alpine.DEB.1.10.0805280442330.4527@xxxxxxxxxxxxxxxx>
References: <alpine.DEB.1.10.0805280442330.4527@xxxxxxxxxxxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Thunderbird 2.0.0.14 (X11/20080501)
Justin Piszcz wrote:
Hardware:

1. Used six 400 GB SATA hard drives.
2. Everything is on PCI-e (965 chipset & a 2-port SATA card).

Used the following 'optimizations' for all tests:

# Set read-ahead (--setra takes 512-byte sectors, so 65536 = 32 MiB).
echo "Setting read-ahead to 32 MiB for /dev/md3"
blockdev --setra 65536 /dev/md3

# Set stripe_cache_size for RAID5 (units are 4 KiB pages per member device).
echo "Setting stripe_cache_size to 16384 pages for /dev/md3"
echo 16384 > /sys/block/md3/md/stripe_cache_size

# Disable NCQ on all disks.
# DISKS holds the member device names, e.g. DISKS="sda sdb sdc sdd sde sdf".
echo "Disabling NCQ on all disks..."
for i in $DISKS
do
  echo "Disabling NCQ on $i"
  echo 1 > /sys/block/"$i"/device/queue_depth   # queue depth 1 turns NCQ off
done
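
A quick way to confirm the settings took effect is to read each tunable back; a minimal sketch, assuming the same /dev/md3 array and $DISKS list as above:

# Verify the tunables.
blockdev --getra /dev/md3                    # read-ahead, in 512-byte sectors
cat /sys/block/md3/md/stripe_cache_size      # stripe cache, in pages
for i in $DISKS
do
  cat /sys/block/"$i"/device/queue_depth     # 1 = NCQ disabled
done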

Given that one of the greatest benefits of NCQ/TCQ is with parity RAID, I'd be fascinated to see how enabling NCQ changes your results. Of course, you'd want to use a single SATA controller with a known-good NCQ implementation, and hard drives known not to do stupid things like disabling readahead when NCQ is enabled.
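
For that comparison run, NCQ could be re-enabled by restoring the per-disk queue depth; a minimal sketch mirroring the loop above, assuming the same $DISKS list (31 is typically the deepest queue libata exposes for SATA NCQ):

# Re-enable NCQ on all disks for the comparison run.
for i in $DISKS
do
  echo "Enabling NCQ on $i"
  echo 31 > /sys/block/"$i"/device/queue_depth
done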

-- Chris

