
Re: Performance problem - reads slower than writes

To: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Subject: Re: Performance problem - reads slower than writes
From: Brian Candler <B.Candler@xxxxxxxxx>
Date: Sat, 4 Feb 2012 11:24:36 +0000
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <4F2D016C.9020406@xxxxxxxxxxxxxxxxx>
References: <20120130220019.GA45782@xxxxxxxx> <20120131020508.GF9090@dastard> <20120131103126.GA46170@xxxxxxxx> <20120131145205.GA6607@xxxxxxxxxxxxx> <20120203115434.GA649@xxxxxxxx> <4F2C38BE.2010002@xxxxxxxxxxxxxxxxx> <20120203221015.GA2675@xxxxxxxx> <4F2D016C.9020406@xxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Sat, Feb 04, 2012 at 03:59:08AM -0600, Stan Hoeppner wrote:
> Will you be using mdraid or hardware RAID across those 24 spindles?

Gluster is the front-runner at the moment. Each file sits on a single
spindle, and there is a separate filesystem per spindle, so I think the
parallel processing will work much better this way. This does mean double
the disks to get data replication though.
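(For anyone following along, the per-spindle layout described above maps onto Gluster roughly like this; the hostnames, volume name, and brick paths here are made up for illustration. Each brick is its own single-disk XFS filesystem, and `replica 2` is what doubles the disk count.)

```shell
# Hypothetical sketch: one XFS filesystem per spindle, each exported
# as a Gluster brick; replica 2 mirrors every brick on a second server.
gluster volume create myvol replica 2 \
    serverA:/bricks/disk01 serverB:/bricks/disk01
gluster volume start myvol
```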

I did some testing of RAID6 mdraid (12 disks with a 1MB stripe size) and
it sucked.  However, I need to re-test it now that I know about inode64.
We do have a requirement for archival storage and that might use RAID6.
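(For the record, inode64 is just a mount option; the device and mount point below are made up. Without it, XFS keeps all inodes in the first 1TB of the filesystem, which can concentrate metadata and new-file allocation at the start of a big array; inode64 lets inodes and allocation spread across all allocation groups.)

```shell
# Hypothetical sketch: remount the test filesystem with inode64.
mount -o inode64 /dev/md0 /mnt/archive

# Or persistently via /etc/fstab:
# /dev/md0  /mnt/archive  xfs  inode64,noatime  0 0
```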


