
Re: Performance problem - reads slower than writes

To: Brian Candler <B.Candler@xxxxxxxxx>
Subject: Re: Performance problem - reads slower than writes
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Sat, 04 Feb 2012 06:49:23 -0600
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <20120204112436.GA3167@xxxxxxxx>
References: <20120130220019.GA45782@xxxxxxxx> <20120131020508.GF9090@dastard> <20120131103126.GA46170@xxxxxxxx> <20120131145205.GA6607@xxxxxxxxxxxxx> <20120203115434.GA649@xxxxxxxx> <4F2C38BE.2010002@xxxxxxxxxxxxxxxxx> <20120203221015.GA2675@xxxxxxxx> <4F2D016C.9020406@xxxxxxxxxxxxxxxxx> <20120204112436.GA3167@xxxxxxxx>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
On 2/4/2012 5:24 AM, Brian Candler wrote:
> On Sat, Feb 04, 2012 at 03:59:08AM -0600, Stan Hoeppner wrote:
>> Will you be using mdraid or hardware RAID across those 24 spindles?
> 
> Gluster is the front-runner at the moment. Each file sits on a single
> spindle, and there is a separate filesystem per spindle, so I think the
> parallel processing will work much better this way. This does mean double
> the disks to get data replication though.

Apparently you've read of a different GlusterFS.  The one I know of is
for aggregating multiple storage hosts into a cloud storage resource.
It is not designed to replace striping or concatenation of disks within
a single host.

Even if what you describe can be done with Gluster, the performance will
likely be significantly less than a properly set up mdraid or hardware
RAID.  Again, if it can be done, I'd test it head-to-head against RAID.
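A head-to-head test can start as simply as timing streaming writes and reads with dd.  This is only a rough sketch; TESTDIR and the file size are placeholders, and for meaningful numbers the file should be several times larger than RAM so the read actually hits the disks:

```shell
# Rough streaming-throughput check -- TESTDIR and MB are placeholders;
# point TESTDIR at a filesystem on the array under test, and raise MB
# well past RAM size for real measurements.
TESTDIR=${TESTDIR:-/tmp}
MB=${MB:-64}

# Write a test file, forcing it to disk so dd's reported rate is honest.
dd if=/dev/zero of="$TESTDIR/seqtest" bs=1M count="$MB" conv=fdatasync

# Drop the page cache if we can (needs root) so the read hits the disks.
sync
[ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches

# Time the sequential read back.
dd if="$TESTDIR/seqtest" of=/dev/null bs=1M

rm -f "$TESTDIR/seqtest"
```

Run it once per candidate layout (Gluster-per-spindle vs. mdraid) on otherwise idle hardware; a tool like fio gives finer control over queue depth and parallel jobs if dd proves too crude.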

> I did some testing of RAID6 mdraid (12 disks with 1MB stripe size) and
> it sucked.  However I need to re-test it now that I know about inode64.
> We do have a requirement for archival storage and that might use RAID6.

I've never been a fan of parity RAID, let alone double-parity RAID.
SATA drives are so cheap (or were, until the flooding in Thailand) that
it's really hard to justify RAID6 over RAID10 or a layered stripe over
mirrors, given RAID10's many advantages and negligible disadvantages.
The RAID6 dead-drive rebuild time, and the performance degradation
during the rebuild on a production system with real users, are
justification enough to go RAID10: its rebuild takes many hours, if not
days, less and degrades performance only mildly.
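For reference, the two layouts being weighed here might be created like this.  This is a configuration sketch only; the device names, chunk size, and mount point are hypothetical, and the inode64 option Brian mentions applies to the XFS mount either way:

```shell
# Hypothetical 12-drive layouts -- /dev/sd[b-m], the 512K chunk, and
# /srv/storage are illustrative, not recommendations for any hardware.

# RAID6: 10 data spindles + 2 parity; more usable space, but a failed
# drive forces a full-array rebuild.
mdadm --create /dev/md0 --level=6 --raid-devices=12 --chunk=512 \
      /dev/sd[b-m]

# RAID10: 6 mirrored pairs striped together; a rebuild only reads the
# failed drive's mirror partner, not the whole array.
mdadm --create /dev/md0 --level=10 --raid-devices=12 --chunk=512 \
      /dev/sd[b-m]

# Either way, mount XFS with inode64 so inodes (and data allocated near
# them) spread across all allocation groups instead of clustering at
# the start of the filesystem.
mkfs.xfs /dev/md0
mount -o inode64 /dev/md0 /srv/storage
```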

-- 
Stan
