Subject: Re: advice: 3ware+raid+xfs
From: Seth Mos <knuffie@xxxxxxxxx>
Date: Tue, 09 Dec 2003 14:40:52 +0100
At 12:14 7-12-2003 -0500, Gaspar Bakos wrote:
> running most probably RH9.0 and kernel 2.4.22-xfs. I haven't decided yet
> about the arrangement of the 4x250GB disks, but there will definitely be
> XFS on them. I need more than 500GB of total space; my possibilities are:
> 1. JBOD, each disk one partition (drawback: I have to take care not to fill any of them)
> 2. RAID 0, one single 1TB XFS partition
> 3. RAID 5
I would suggest using RAID 10 (size = n/2) if you have an environment with heavy writes. If you won't write to the fs much and it's mostly reads, you could use RAID 5 (size = n-1).
If it's a production server I tend to "waste" the money and opt for RAID 10 instead, since it's so much faster for database workloads and write-heavy environments. RAID 10 performance is also a lot more consistent under load.
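To make the size = n/2 and size = n-1 figures above concrete, here is a small sketch of the usable capacity of each layout for the 4x250GB disks in question (the function name and structure are my own illustration, not anything from a RAID tool):

```python
def usable_gb(n_disks, disk_gb, level):
    """Usable capacity in GB for the RAID layouts discussed above."""
    if level in ("jbod", "raid0"):
        return n_disks * disk_gb          # all space usable, no redundancy
    if level == "raid5":
        return (n_disks - 1) * disk_gb    # one disk's worth of parity
    if level == "raid10":
        return (n_disks // 2) * disk_gb   # every block mirrored: size = n/2
    raise ValueError(level)

for lvl in ("raid0", "raid5", "raid10"):
    print(lvl, usable_gb(4, 250, lvl), "GB")
# raid0 1000 GB, raid5 750 GB, raid10 500 GB
```

So the price of the more consistent RAID 10 write performance here is 250GB of capacity versus RAID 5.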
> relatively big files (8MB and 16MB) accompanied by very small files
> (<1kB). Recovery issues: I saw xfs_check run out of memory on a single
> 120GB partition after an unexpected power failure. 3ware configuration
> issues that might be related to XFS, speed, efficiency.
> Being an astronomer, I am not that experienced with sw/hw issues... I was
> always wondering, when people write "we have been testing XFS with 60TB
> filesystems" (and other magic numbers), how do they do that?
If it's 60TB, I don't think it's a single filesystem under Linux. The current limit is 2TB per block device. AFAIK this is fixed in the upcoming 2.6 kernel.
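For the curious, my reading of where that 2TB figure comes from (an assumption on my part, not stated in the mail): 2.4-era block devices are addressed with a 32-bit sector number, and a sector is 512 bytes, so the largest addressable device works out to:

```python
# 32-bit sector index x 512-byte sectors = largest addressable 2.4 block device
max_bytes = 2**32 * 512
print(max_bytes)                   # 2199023255552 bytes
print(max_bytes / 2**40, "TiB")    # exactly 2 TiB
```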
--
Seth
I don't make sense, I don't pretend to either. Questions?