
On RAID, inode size, stripe size (was: Playing around with NFS+XFS)

To: Dan Yocum <yocum@xxxxxxxx>
Subject: On RAID, inode size, stripe size (was: Playing around with NFS+XFS)
From: Federico Sevilla III <jijo@xxxxxxxxxxxxxxxxxxxx>
Date: Wed, 5 Sep 2001 23:19:40 +0800 (PHT)
Cc: Linux XFS Mailing List <linux-xfs@xxxxxxxxxxx>, "Philippine Linux Users' Group Mailing List" <plug@xxxxxxxxxxxxxxxxx>
In-reply-to: <3B953C69.BBACF47@fnal.gov>
Sender: owner-linux-xfs@xxxxxxxxxxx
Dan,
(cc PLUG, XFS mailing lists)

On Tue, 4 Sep 2001 at 15:41, yocum@xxxxxxxx wrote:
> That's the best you'll get out of your 3ware 6400 card under RAID5, so
> your bottleneck is the card, not the network, here.  :-(

Ugh! Ouch. Boohoohoo. I presume this has something to do with the on-board
controller and/or the algorithm they use? :(

> If you want good performance out of a 3ware card, use RAID 1 or 10 (if
> you have a 6x00 card) or get a 7x10, which will do about 17MB/s in
> RAID5.  It's still not great, but a lot better than 6MB/s.  RAID1/10
> on the 7810 is >>100MB/s for writes, and about 180MB/s reads.

D*mn! Information like this makes me wonder why I chose RAID5 in the
first place. I guess it's because I didn't have access to numbers like
these before making the decision, and RAID5 gives significantly more
usable disk space, so I said, what the heck. Perhaps I will find the
time in the not-so-far-away future to take the server offline, back up
all the data onto some hard drives, and redo the array as RAID10,
which, aside from being much faster, is also more fault tolerant: it
can survive two failed drives as long as they aren't both halves of
the same mirrored pair.

Maybe you're the right person to ask: what is RAID5 actually good for,
then? And no, I don't think I can afford to upgrade to RAID50 like you
did. <envy> ;>

> So, here's what I get for performance on NFSv3 over gigabit ethernet
> to XFS (I didn't tweak the r/wmem_default values, only the r/wmem_max.

I presume tweaking only the r/wmem_max values is safer than meddling
with the defaults?
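To be concrete about what I mean (the values here are just my own
guess, not necessarily what you used):

    # raise only the maximum socket buffer sizes
    echo 262144 > /proc/sys/net/core/rmem_max
    echo 262144 > /proc/sys/net/core/wmem_max

i.e. applications that explicitly ask for larger buffers can get them,
while every other socket on the box keeps the stock r/wmem_default.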

> The XFS volume is RAID50, hw RAID5, then sw RAID0 (striped), hence the
> reason I can get >17MB/s.  I used a 512kb chunksize for the sw RAID0,
> but I think I might be able to get better performance if I used 448kb.
> *and* no data/inode corruption now that I use '-i size=512'.

Would you mind explaining to my young mind how using an inode size of 512
bytes protects from data/inode corruption with hardware RAID5? Will this
be significant for other setups (hardware RAID10, software RAID, no RAID
at all)? Does this have any major disk space or performance impacts?
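For reference, I take it this is simply passed to mkfs at filesystem
creation time, something along the lines of (device name is only an
example, of course):

    # 512-byte inodes instead of the default 256 bytes
    mkfs.xfs -i size=512 /dev/sda5

which is why I'm curious whether the doubled inode size costs anything
noticeable in space or speed.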

Also, with the 3ware 6x00 controllers, RAID5 is limited to a 64K
stripe size, while RAID0 and RAID10 can use a 64K, 128K, 256K, 512K,
or 1M stripe size. How does the stripe size of the hardware RAID
affect performance and disk utilization? And how can/should the
filesystem be tuned to "match" the stripe size of the hardware RAID?
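From reading the mkfs.xfs manual page, I gather the relevant knobs are
the data section's sunit/swidth values, given in 512-byte sectors. A
sketch of what I have in mind, assuming the 64K RAID5 stripe unit
across, say, three data drives (device name again just an example;
please correct me if the arithmetic is off):

    # 64K stripe unit = 128 sectors; 3 data disks * 128 = 384
    mkfs.xfs -d sunit=128,swidth=384 /dev/sda6

Is that the sort of "matching" you mean, or is there more to it?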

I'm full of questions. Thanks a lot for the info (that you already gave
and that you will hopefully continue to give, hehehe). ;>

 --> Jijo

--
Federico Sevilla III  :: jijo@xxxxxxxxxxxxxxxxxxxx
Network Administrator :: The Leather Collection, Inc.
GnuPG Key: <http://jijo.leathercollection.ph/jijo.gpg>

