Re: Pulling disk out of the RAID 5 Array?

To: <linux-xfs@xxxxxxxxxxx>
Subject: Re: Pulling disk out of the RAID 5 Array?
From: "Steve Wolfe" <nw@xxxxxxxxx>
Date: Tue, 18 Dec 2001 14:41:00 -0700
References: <Pine.A41.4.33.0112181319500.84720-100000@xxxxxxxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
> Thanks a bunch, Steve, your review was great... Now about Level 1, how is
> that a different matter?  (Good/Bad/Etc)

  I was only talking about the benefits of hardware over software at
certain RAID levels, not hot-swapping or any other merits. : )  In RAID 5,
the parity calculation you need to do on every write is pretty
substantial, and having it on a hardware card offloads that work from the
host computer's CPU.  RAID 0 and 1 don't have that parity overhead, so
running them in software has a much, much lower impact on the host CPU.
The flip side is that if the processor on the RAID controller is truly
wimpy, the array can run very slowly.  I've heard that's the case with the
low-end 3ware IDE RAID cards: their processors can't handle the load of
RAID 5 well.  Supposedly, the higher-end cards have much better processors
on them.  The 3ware cards are targeted as a very low-cost RAID solution,
and that has to be taken into consideration.
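
  To make that write overhead concrete, here's a rough Python sketch of
the parity work a RAID 5 implementation has to do on every full-stripe
write.  This is just a toy illustration, not how md or any 3ware firmware
actually does it; the 3+1 layout and 64 KB block size are made-up numbers.

    # Toy RAID 5 parity sketch: the parity block is the XOR of the data
    # blocks in the stripe.  Assumes a 4-disk array (3 data + 1 parity)
    # and 64 KB blocks -- both are just assumptions for illustration.
    BLOCK_SIZE = 64 * 1024

    def parity_block(data_blocks):
        """XOR the data blocks together to get the stripe's parity block."""
        parity = bytearray(BLOCK_SIZE)
        for block in data_blocks:
            for i, byte in enumerate(block):
                parity[i] ^= byte   # this per-byte XOR is the CPU cost
        return bytes(parity)

    # One full-stripe write on a 3+1 array: three data blocks in,
    # one parity block out.
    stripe = [bytes([d]) * BLOCK_SIZE for d in (1, 2, 3)]
    p = parity_block(stripe)

A hardware card does the same XOR (often with a dedicated XOR engine),
which is why a weak controller processor can become the bottleneck instead
of the host.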

>  I found a couple 2U cases with
> two hot swap bays on the front - which might be the most economical route
> for having a drive redundancy for a server farm - rather than having one
> shared storage for a number of servers.

  First, if you want the drives in the units, you might think about the
BoomRack 2U series C (http://www.boomrack.com/html/2u.htm); it will give
you *4* hot-swappable bays on the front of a 2U chassis.  I've used their
4U racks, and am very impressed with them.  I'll be trying out some of the
2Us sometime in the future, just not right now. : )  If you buy them, try
to get them from a reseller.  BoomRack charges a LOT if you buy directly
from them, in order to push purchases through resellers.  A 4U chassis
that's around $400 direct from them may only be $230 or so from a
reseller.

  Second, having redundancy in the machines is certainly good for uptime,
but centralized storage also has its benefits.  Backups, especially, come
to mind.  Doing nightly backups of multiple machines across a network
sounds like a very time-consuming process; I much prefer to have the
DAT/tape/whatever in the file server itself.

  To go even further, it's certainly possible to eliminate the drives from
the servers entirely!  If you wanted, you could either boot from the
network or from a flash drive, use a RAM drive for /tmp, and mount
everything else from the file server (a rough sketch of what that looks
like is below).  In some situations that works well; in others it's not
what you want - but it certainly does promote uptime at a low cost, since
you avoid having to buy drives, RAID controllers, and special chassis for
each of the front-end servers.  Or, if you're lucky enough to have a
load-balancer with automatic failover, you can just stick a cheap IDE
drive in each machine and forget about it.  If you've bought decent IDE
drives, then once every few years you'll notice that one machine has
dropped out of the rotation and you'll have to go work on it. ; )
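
  For what it's worth, an NFS-root front end like that might end up with
an fstab along these lines.  This is just a sketch under assumptions: the
host name "fileserver" and the export paths are made up, and I'm using
tmpfs for the RAM-backed /tmp here.

    # /etc/fstab on a hypothetical diskless front-end server
    # ("fileserver" and the export paths are made-up names)
    fileserver:/exports/web1/root   /      nfs    defaults   0 0
    fileserver:/exports/usr         /usr   nfs    ro         0 0
    none                            /tmp   tmpfs  size=64m   0 0
    none                            /proc  proc   defaults   0 0

Everything the box actually cares about lives on the file server, so
losing the front end costs you nothing but the hardware.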

>  I'm definitely looking at a
> hardware based solution, I actually want to use the same boxes for a
> couple FreeBSD machines a client is going to need as well.  Between
> mirroring and parity, how do the hotswaps compare?

  I haven't used mirroring, but from my understanding, it should work
smoothly in hardware, while in software it can have its drawbacks.  The
beauties of hardware RAID (to me, at least) are that it offloads the
processing from the host CPU, has onboard cache, and is a lot closer to
"plug and play".

steve

