OK, I know we're off the topic of this list (XFS), so let's take this to the
ide-array list (majordomo@xxxxxxxxxxxxxxxxx) after this response.
Federico Sevilla III wrote:
> (cc PLUG, XFS mailing lists)
> On Tue, 4 Sep 2001 at 15:41, yocum@xxxxxxxx wrote:
> > That's the best you'll get out of your 3ware 6400 card under RAID5, so
> > your bottleneck is the card, not the network, here. :-(
> Ugh! Ouch. Boohoohoo. I presume this has something to do with the on-board
> controller and/or the algorithm they use? :(
I talked to one of the 3ware VPs one day and he said that a) the 6x00 CPU is
way underpowered to do RAID5 parity calculations at a decent speed, b)
there's no on-board cache, and c) they've had troubles with the FPGAs on the
cards. How the last item affects write speed I don't know; maybe he was
complaining about their reliability. Anyway, the 7810 has a faster CPU and
uses ASICs instead of FPGAs, but still doesn't have an on-board cache. The
6x00 cards were made RAID5 capable through a firmware update after Promise
came out with a RAID5 card. It was clearly a "me too!" reaction to the
Promise card.
> Maybe you're the expert for this: what is RAID5 good for then?
We're more concerned with maximum filesystem size and _read_ speed since
we'll write once, read *a lot*. Read speed on the 6x00 cards isn't terrible
(85MB/s IIRC) but is even better under the 7x10 cards (180MB/s!!).
> > So, here's what I get for performance on NFSv3 over gigabit ethernet
> > to XFS (I didn't tweak the r/wmem_default values, only the r/wmem_max).
> I presume only tweaking the r/wmem_max is safer than meddling with the
> r/wmem_default values?
To be honest, I forgot to change the r/wmem_default values when I ran those
tests.
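For reference, those socket buffer limits are plain sysctls under
net.core. A minimal sketch of the tuning being discussed; the specific
sizes here are my own illustration, not the values used on these boxes:

```shell
# Raise the *maximum* socket buffer sizes the kernel will grant
# (the r/wmem_max knobs); NFS and applications can then request
# larger buffers than the stock limits allow.
sysctl -w net.core.rmem_max=524288
sysctl -w net.core.wmem_max=524288

# The *defaults* (r/wmem_default) can be raised the same way,
# though that affects every socket, not just ones that ask:
# sysctl -w net.core.rmem_default=262144
# sysctl -w net.core.wmem_default=262144
```

The same values can go in /etc/sysctl.conf to survive a reboot.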
> > *and* no data/inode corruption now that '-i size=512' now.
(hm. note to self: wipe out and eradicate superfluous redundancies)
> Would you mind explaining to my young mind how using an inode size of 512
> bytes protects from data/inode corruption with hardware RAID5?
Yeah, what Steve said. I just threw that comment in there so this thread
sort of remained on topic. Sorry for the confusion.
Speaking of being off topic, I'm about to post a technical note on the
ide-array list about the systems we've got and what I had to do to get them
to where they are now (which is pretty damn stable). I'm pretty sure I've
got all the major bugs taken care of (with lots of help from mkp and Eric
and Adam at 3ware and....)
> Also with the 3ware 6x00 controllers, RAID5 is limited to 64K stripe size,
> while RAID0 and RAID10 use 64K, 128K, 256K, 512K or 1M stripe size. How
> does the stripe size of hardware RAID affect performance and disk
> utilization? And how can/should the filesystem be tuned to "match" the
> stripe size of hardware RAID?
No effect on utilization. Definitely a factor in speed, but I don't know the
specifics. In fact, I'm not sure any other RAID card manufacturer allows
anything other than a 64K stripe size under RAID5, but my RAID experience
is admittedly limited.
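On the filesystem-matching question: XFS does let you tell mkfs the stripe
geometry so allocations line up with the array. A hedged sketch, assuming a
4-drive RAID5 (so 3 data disks) at the 64K stripe size; sunit and swidth
are given in 512-byte blocks, and /dev/sda is a placeholder device name:

```shell
# 64K stripe unit = 128 x 512-byte blocks.
# 3 data disks in a 4-drive RAID5, so stripe width = 3 x 128 = 384.
# -i size=512 is the larger inode size mentioned earlier in the thread.
mkfs.xfs -d sunit=128,swidth=384 -i size=512 /dev/sda
```

With the geometry declared, XFS tries to align inode and data allocation to
stripe boundaries, which avoids some read-modify-write cycles on RAID5.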
> I'm full of questions. Thanks a lot for the info (that you already gave
> and that you will hopefully continue to give, hehehe). ;>
"And my consulting fee is.... " ;-)
I'm just happy to have systems that I can turn on and forget at the moment.
Get me back into the commercial world, and my tune may change. But, for the
time being, I live in a purely open source, open information Xanadu! I have
achieved Nirvana. Hooooommmmmmeeee, Hoooooommmmmmmeee.
Sloan Digital Sky Survey, Fermilab 630.840.6509
SDSS. Mapping the Universe.