
Re: Software RAID, a bit OT

To: Ben Gollmer <ben@xxxxxxxxxxxx>
Subject: Re: Software RAID, a bit OT
From: Simon Matter <simon.matter@xxxxxxxxxxxxxxxx>
Date: Thu, 18 Jul 2002 15:59:31 +0200
Cc: XFS <linux-xfs@xxxxxxxxxxx>
Organization: Sauter AG, Basel
References: <B95B1F8C.1E61%ben@xxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
Ben Gollmer wrote:
> 
> Hi storage gurus!
> 
> Hope you don't mind me asking this on-list, but I've gotten some very
> helpful storage-related info from here in the past. I'm putting together a
> server for a small group of developers; we have no real budget, so we're
> trying to keep things cheap. Here are our hardware specs so far:
> 
> 2x P3 700 MHz
> 512 MB RAM
> 2 Promise PCI ATA-133 controllers
> 3 Seagate 80 GB HDDs

I'm using Promise Ultra100TX2 controllers without any problem. I have
software RAID5 on them with XFS; the external log sits on a separate
software RAID1 on the same disks. I strongly recommend not using an
internal log with XFS on RAID5, although that should be much improved
by now.
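
Roughly, the layout looks like the sketch below. Take it as an example
of this kind of setup rather than my exact config -- the device names,
partition sizes and raidtab entries are only illustrative. The idea is
to carve a small partition (around 100 MB is plenty for the log) off
two of the disks for the RAID1, and give the rest to the RAID5:

  # /etc/raidtab (raidtools) -- illustrative devices only
  raiddev /dev/md0                  # small RAID1 holding the XFS log
      raid-level            1
      nr-raid-disks         2
      nr-spare-disks        0
      chunk-size            4
      persistent-superblock 1
      device                /dev/hde1
      raid-disk             0
      device                /dev/hdg1
      raid-disk             1

  raiddev /dev/md1                  # RAID5 holding the data
      raid-level            5
      nr-raid-disks         3
      nr-spare-disks        0
      parity-algorithm      left-symmetric
      chunk-size            64
      persistent-superblock 1
      device                /dev/hde2
      raid-disk             0
      device                /dev/hdg2
      raid-disk             1
      device                /dev/hdi2
      raid-disk             2

  # create the arrays, then point the XFS log at the RAID1
  mkraid /dev/md0
  mkraid /dev/md1
  mkfs.xfs -l logdev=/dev/md0 /dev/md1
  mount -t xfs -o logdev=/dev/md0 /dev/md1 /data

If you prefer mdadm over raidtools, the same arrays can be created with
something like "mdadm --create /dev/md0 --level=1 --raid-devices=2
/dev/hde1 /dev/hdg1".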

Simon

> 
> This server is going to handle file sharing, e-mail, CVS, and a bug-tracking
> database for us. Our project has some rather large files, so we need a good
> amount of storage space. I was planning to combine the HDDs in a software
> RAID 5 for a total of 160 GB. I have been enjoying XFS on my workstation,
> but I know it has had problems with software RAID 5 in the past. Are these
> problems fixed now?
> 
> We also considered trying to grab another 80 GB drive from somewhere and do
> a RAID 0+1 (still giving us 160 GB of storage), but I don't know if Linux
> software RAID handles this well.
> 
> Most of us run the -aa kernel tree on our workstations, but I have no problem
> running SGI CVS kernels on the server if they are reasonably stable. Any
> input would be much appreciated :)
> 
> Ben


