I (well, OK, the lab) will soon be the proud papa of a big IDE-SCSI RAID
array. Basically, it's a tower full of IDE disks, a RAID controller, and
a protocol adapter that makes it look like U2W SCSI to the host. Fill it
with 8 80GB disks in a RAID5, and I'm looking at a 560GB partition (for
$6K, btw). That, in my opinion, excludes ext2 -- I'd rather not fsck
560GB, thank you very much. I'll be exporting this via NFS over a full
duplex 100Mbit network to the users, and that seems to preclude ReiserFS.
Which brings me to xfs (which I also like, having cut my teeth on some SGI
boxen).
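(For anyone checking my arithmetic: RAID5 gives you the capacity of all
but one disk, the remainder going to parity. A quick sketch, with the
function name being my own shorthand:)

```python
# RAID5 usable capacity: one disk's worth of space is consumed by
# parity, so usable = (n_disks - 1) * disk_size.
def raid5_usable_gb(n_disks: int, disk_size_gb: int) -> int:
    return (n_disks - 1) * disk_size_gb

print(raid5_usable_gb(8, 80))  # 8 x 80GB disks -> 560GB usable
```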
Which, finally, brings me to my question. What is the current
conventional wisdom on the best way to get xfs up and running? I'll
obviously be testing before I put this into production. Should I be
grabbing the latest CVS snapshots, or using the 0.9 pre-release? I'll
probably be running on RH7.0 (as it seems a bit harder to wedge 2.4.x onto
6.2) with the plan of moving to 7.1 (along with the rest of my network) a
short while after it comes out. Any and all input is much appreciated.
Thank you, and thanks for bringing xfs to Linux.
--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University