
Re: advice: 3ware+raid+xfs

To: gbakos@xxxxxxxxxxxxxxx, linux-xfs@xxxxxxxxxxx
Subject: Re: advice: 3ware+raid+xfs
From: Seth Mos <knuffie@xxxxxxxxx>
Date: Tue, 09 Dec 2003 14:40:52 +0100
In-reply-to: <Pine.SOL.4.58.0312071202250.20497@antu.cfa.harvard.edu>
Sender: linux-xfs-bounce@xxxxxxxxxxx
At 12:14 7-12-2003 -0500, Gaspar Bakos wrote:
Dear all,

I am about to build a system for data reduction, but before I do so, I
thought of posting this in case anyone has useful hints ("don't do that!",
or "be careful with...").

It would be a dual Xeon mobo + 3ware Escalade card + 4x250GB (WD) disks,

Good choice. I assume you mean the 4-port card.

running most probably RH9.0 and kernel 2.4.22-xfs. I haven't decided yet
about the arrangement of the 4x250GB disks, but definitely there will be
XFS on them. My possibilities are (I need more than 500GB of total space):
1. JBOD, each disk one partition
(drawback: I have to take care not to fill up any of them)
2. RAID-0, one single 1TB XFS partition
3. RAID-5

I would suggest RAID 10 (size = n/2) if you have an environment with heavy writes. If you won't write to the fs much and it's mostly reads, you could use RAID 5 (size = n-1).


If it's a production server I tend to waste the money and opt for RAID 10 instead, since it's so much faster for database workloads and write-heavy environments. RAID 10 performance is also a lot more consistent under load.
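
The Escalade normally builds the array itself in its BIOS or CLI, but purely as a sketch of the two layouts, here is the software-raid equivalent with mdadm (device names are made up; on 2.4 a RAID 10 is built by nesting RAID 1 pairs under a RAID 0):

  # two mirrored pairs, then a stripe across them (usable space = 2 x 250GB)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

  # or a single RAID 5 over all four disks (usable space = 3 x 250GB)
  mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]1

  mkfs.xfs /dev/md2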

relatively big files (8MB and 16MB) accompanied by very small files
(<1kB). Recovery issues: I saw xfs_check run out of memory on a single
120GB partition after an unexpected power failure. 3ware configuration
issues that might be related to XFS, speed, efficiency.

If you read/write/modify lots of small files, create the fs with a larger log: mkfs.xfs -l size=32768b /dev/foo
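
With the default 4k block size that gives a 128MB log (32768 fs blocks). A rough sketch of the whole sequence, assuming the fs ends up mounted on /data (device and mount point are placeholders):

  mkfs.xfs -l size=32768b /dev/foo     # 32768 fs blocks = 128MB log at 4k block size
  mount -o logbufs=8 /dev/foo /data    # more in-core log buffers for metadata-heavy loads
  xfs_info /data                       # check the log section of the finished fs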

Being an astronomer, I am not that experienced with sw/hw issues...
I was always wondering, when people write "we have been testing XFS with
60TB filesystems" (and other magic numbers) - how do they do that?

If it's 60TB, I don't think it's a single filesystem under Linux. The current limit is 2TB per block device. AFAIK this is fixed in the upcoming 2.6.
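
The 2TB comes from the 32-bit sector count on 2.4 (2^32 x 512 bytes). Your 4x250GB box stays well under it whichever layout you pick; if you want to double-check what a device reports (device name is just an example):

  cat /proc/partitions          # sizes in 1k blocks; the 2TB limit is 2147483648 blocks
  blockdev --getsize /dev/sda   # size in 512-byte sectors; 2TB = 4294967296 sectors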


Cheers

--
Seth
I don't make sense, I don't pretend to either. Questions?

