
Re: Tera-Byte+ fileservers

To: pac@xxxxxxxxxxxxxx
Subject: Re: Tera-Byte+ fileservers
From: Ragnar Kjørstad <xfs@xxxxxxxxxxxxxxxxxxx>
Date: Fri, 24 Aug 2001 21:35:54 +0200
Cc: linux-xfs@xxxxxxxxxxx, mike@xxxxxxxxxxxxxx
In-reply-to: <20010824110841.A22894@xxxxxxxxxxx>; from pac@xxxxxxxxxxxxxx on Fri, Aug 24, 2001 at 11:08:41AM -0500
References: <20010824110841.A22894@xxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
User-agent: Mutt/1.2.5i
On Fri, Aug 24, 2001 at 11:08:41AM -0500, pac@xxxxxxxxxxxxxx wrote:
>  Anyone know what makes a good inexpensive TeraByte fileserver?
>  Specifically looking for something between 1TB and 100TB.
>  This would run XFS for sure.

We don't usually advertise on mailing lists, but you asked for it, so...
Yes, Big Storage makes good inexpensive TeraByte fileservers.


> I'd like to put together several systems, in both IDE and SCSI version.
 
> 1. Is there a decent IDE-raid card that can support over a Terabyte?
>    Is there such a thing as a hot-swappable IDE raid card?
>    Whats the fastest thru-put I can expect out of an IDE raid setup?
>    Does an IDE TB server even make sense?

I'm not sure what you mean by an IDE-RAID card.
You can't realistically use PCI RAID controllers for a setup this big:
you can't fit that many disks close enough to the server, and it's a
pain to manage.

You should go with either SCSI-to-SCSI or IDE-to-SCSI RAIDs (or Fibre
Channel) - it's primarily a price vs. performance issue.

> 2. Is SCA the way to go for scsi? What cards, backplanes, drives
>    can you recommend?
>    Whats the fastest thru-put I can expect out of an SCSI raid setup?

We've measured 120 MB/s sequential writes (using bonnie++ over XFS on
Linux) over a single SCSI channel. If you're after top performance
you should use multiple RAIDs connected on separate SCSI channels. Your
bottleneck will then be the PCI bus on your server. You can get better
performance on a non-x86 system, but if you care about cost and don't
need all your 100TB on a single server, you get far more for your money
by splitting it up across multiple x86 boxes.
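For a quick sanity check of sequential write throughput before running a
full bonnie++ pass, something like the following dd invocation works; the
target directory and file sizes here are placeholders, so point TESTDIR at
a directory on the RAID under test and bump the count up well past your
RAM size to keep the page cache from flattering the numbers:

```shell
# Rough sequential-write test; TESTDIR=/tmp is only a placeholder,
# use a directory on the actual RAID volume.
TESTDIR=/tmp
# conv=fdatasync forces the data to disk before dd reports a rate.
dd if=/dev/zero of=$TESTDIR/ddtest bs=1M count=64 conv=fdatasync
rm -f $TESTDIR/ddtest
```

bonnie++ will give you per-phase numbers (char/block writes, rewrites,
seeks), but dd is handy for comparing channels quickly.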

> 3. Whats the fastest throughput i can get away with? Gigabit ether?
>    Is there another interface that makes more sense?
One or more gigabit links sounds like a good solution.

As others have already responded, there is a 1 or 2 TB device-size limit
in the Linux kernel, depending on which drivers you use. There is work
being done to fix this, and we've successfully created > 2TB
filesystems on our hardware - but it's not likely to be in the standard
kernel until 2.5/2.6.
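The limit falls out of the block layer addressing 512-byte sectors with a
32-bit number; whether the driver treats that number as signed or unsigned
is (as I understand it) what gives the 1 TB vs. 2 TB difference:

```python
SECTOR_SIZE = 512            # bytes per sector in the Linux block layer

# Unsigned 32-bit sector index: 2^32 sectors addressable.
unsigned_limit = 2**32 * SECTOR_SIZE
print(unsigned_limit // 2**40, "TiB")   # → 2 TiB

# Signed 32-bit sector index: only 2^31 sectors addressable.
signed_limit = 2**31 * SECTOR_SIZE
print(signed_limit // 2**40, "TiB")     # → 1 TiB
```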

It's hard to give you any good advice as you didn't really say much
about what you're going to use your system for. Please email us back off
the list with more info.



-- 
Ragnar Kjorstad
Big Storage

