
Re: Verify filesystem is aligned to stripes

To: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Subject: Re: Verify filesystem is aligned to stripes
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 25 Nov 2010 21:15:37 +1100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <4CEE0995.9030900@xxxxxxxxxxxxxxxxx>
References: <4CED5BFC.8000906@xxxxxxxxxxxxx> <20101125054607.GM13830@dastard> <4CEE0995.9030900@xxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.20 (2009-06-14)
On Thu, Nov 25, 2010 at 01:00:37AM -0600, Stan Hoeppner wrote:
> Dave Chinner put forth on 11/24/2010 11:46 PM:
> 
> > Because writes for workloads like this are never full stripe writes.
> > Hence reads must be done to pull in the rest of the stripe before the
> > new parity can be calculated. This RMW cycle for small IOs has
> > always been the pain point for stripe based parity protection. If
> > you are doing lots of small IOs, RAID1 is your friend.
> 
> Do you really mean RAID1 here Dave, or RAID10?  If RAID1, please
> elaborate a bit.

RAID10 is just a convenient way of saying "striped mirrors" or
"mirrored stripes". Fundamentally they are still using RAID1 for
redundancy - a mirror of two devices. A device could be a single
drive or a stripe of drives.

> RAID1 traditionally has equal read performance to a
> single device, and half the write performance of a single device.

A good RAID1 implementation typically has the read performance of
two devices (i.e. it can read from both legs simultaneously) and the
write performance of a single device.

Parity based RAID is only fast for large write IOs or small IOs that
are close enough together that a stripe cache can coalesce them into
large writes. If this can't be achieved, parity based RAID will be
no faster than a _single drive_ for writes because all drives will
be involved in RMW cycles. Indeed, I've seen RAID5 LUNs saturate
at only 50 IOPS because every IO required an RMW cycle, while an
equivalent number of drives using RAID1 over RAID0 stripes did 1,000
IOPS...
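[For readers following along: the RMW cycle above can be sketched in a
few lines. This is an illustrative model only, not code from any real
RAID implementation; the function name and stripe layout are made up.]

```python
# Sketch of the RAID5 read-modify-write (RMW) parity update for a small
# write that touches only one chunk of a stripe. The key identity is:
#
#     new_parity = old_parity XOR old_data XOR new_data
#
# Cost per small write: 2 reads (old data chunk, old parity chunk) plus
# 2 writes (new data chunk, new parity chunk) -- four disk IOs instead
# of one, which is why an RMW-bound RAID5 array can be no faster than a
# single drive for small writes.

def rmw_parity(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    """Return the updated parity after overwriting one data chunk."""
    return bytes(p ^ od ^ nd
                 for p, od, nd in zip(old_parity, old_data, new_data))

# Tiny demonstration with a 3-data-disk stripe of 1-byte chunks:
d0, d1, d2 = b"\x0f", b"\xf0", b"\x33"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))  # full-stripe parity

new_d1 = b"\x55"                                # small write: only d1 changes
parity = rmw_parity(d1, parity, new_d1)         # RMW: no need to read d0 or d2

# The incrementally updated parity matches a full recomputation.
assert parity == bytes(a ^ b ^ c for a, b, c in zip(d0, new_d1, d2))
```

A full-stripe write avoids all of this: with every data chunk in hand,
parity is computed directly and no old data or parity needs to be read
back, which is why large or well-coalesced writes are the fast path.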

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
