<& xfsTemplate,top=>1,side=>1 &>
The following are submissions from system administrators who are using XFS in a production environment. If you have a notable XFS installation you'd like added to the list, please email sandeen at sgi.com.
"The Sloan Digital Sky Survey is an ambitious effort to map 1/4 of the visible sky at optical and very-near infrared wavelengths, and take spectra of 1 million extra-galactic objects. The estimated amount of data that will be taken over the 5 year lifespan of the project is 15TB, however, the total amount of storage space required for object informational databases, corrected frames, and reduced spectra will be several factors more than this. The goal is to have all the data online and available to the collaborators at all times, and provide this within strict budget constraints. To accomplish this goal we are using COTS machines with IDE disks configured as RAID50 arrays using XFS. Each machine has 16 81.9GB disks resulting in 1.12TB of usable space. At current hardware prices, these machines cost approximately $11,000 USD, or just under a penny per megabyte. Currently, 10 machines are in production and plans for 12TB more space is in the works."
"For complete details and status of the project please see http://www.sdss.org."
"Coltex Retail group BV in the Netherlands uses Red Hat Linux with XFS for their main database server which collects the data from over 240 clothing retail stores throughout the Netherlands. Coltex depends on the availability of the server for over 100 hundred employees in the main office for retrieval of logistical and sales figures. The database size is roughly 10GB large containing both historical and current data."
"The entire production and logistical system depends on the availability of the system and downtime would mean a significant financial penalty. The speed and reliability of the XFS filesystem which has a proven track record and mature tools to go with it is fundamental to the availability of the system."
"XFS has saved us a lot of time during testing and implementation. A long filesystems check is no longer needed when bad things happen when they do. The increased speed of our database system which is based on Progress 9.1C is also a nice benefit to this filesystem."
"We're a 3D computer graphics/post-production house. We've currently got four fileservers using XFS under Linux online - three 350GB servers and one 800GB server. The servers are under fairly heavy load - network load to and from the dual NICs on the box is basically maxed out 18 hours a day - and we do have occasional lockups and drive failures. Thanks to Linux SW RAID5 and XFS, though, we haven't had any data loss, or significant down time."
"We have one 430GB RAID system with XFS in production storing corporate documents and another will go into production soon storing more scientific data."
"I'm currently in the process of slowly converting 21 clusters totaling 2300+ processors over to XFS."
"These machines are running a fairly stock RH7.1+XFS. The application is our own custom scheduler for doing genomic research. We have one of the worlds largest sequencing labs which generates a tremendous amount of raw data. Vast amounts of CPU cycles must be applied to it to turn it into useful data we can then sell access to."
"Currently, a minority of these machines are running XFS, but as I can get downtime on the clusters I am upgrading them to 7.1+XFS. When I'm done, it'll be about 10TB of XFS goodness... across 9G disks mostly."
"We've replaced our NetApp filer (80GB, $40,000). NetApp ONTAP software [runs on NetApp filers] is basically an NFS and CIFS server with their own proprietary filesystem. We were quickly running out of space and our annual budget almost depleted. What were we to do?"
"With an off-the-shelf Dell 4400 series server and 300GB of disks ($8,000 total). We were able to run Linux and Samba to emulate a NetApp filer."
"XFS allowed us to manage 300GB of data with absolutely no downtime (now going on 79 days) since implementation. Gone are the days of fearing the fsck of 300GB."
"I run the Center for Cytometry and Molecular Imaging at the Salk Institute in La Jolla, CA. We're a core facility for the Institute, offering flow cytometry, basic and deconvolution microscopy, phosphorimaging (radioactivity imaging) and fluorescent imaging."
"I'm currently in the process of migrating our data server to Linux/XFS. Our web server currently uses Linux/XFS. We have about 60 Gb on the data server which has a 100Gb SCSI RAID 5 array. This is a bit restrictive for our microscopists so in order that they can put more data online, I'm adding another machine, also running Linux/XFS, with about 420 Gb of IDE-RAID5, based on Adaptec controllers...."
"Servers are configured with quota and run Samba, NFS, and Netatalk for connectivity to the mixed bag of computers we have around here. I use the CVS XFS tree most of the time. I have not seen any problems in the several months I have been testing."
"The Austin Museum of Art has two file servers (only 40 and 50 gig of software RAID respectively) running on RedHat 7.1 XFS. We have another file server that is using the ext2 acl patches for backwards compatibility which is 100 gig. About 50 users use the two XFS boxes pretty much all day long. Had one power failure, boxes were up for 10 minutes before the other box was done fscking."
"We use a production server with a 270 GB RAID 5 (hardware) disk array. It is based on a Suse 7.2 distribution, but with a standard 2.4.12 kernel with XFS and LVM patches. The server provides NFS to 8 Unix clients as well as Samba to about 80 PCs. The machine also runs Bind 9, Apache, Exim, DHCP, POP3, MySQL. I have tried out different configurations with ReiserFS, but I didn't manage to find a stable configuration with respect to NFS. Since I converted all disks to XFS some 3 months ago, we never had any filesystem-related problems."
"Here at the IQ Group, Inc. we use XFS for all our production and development servers."
"Our OS of choice is Slackware Linux 8.0. Our hardware of choice is Dell and VALinux servers."
"As for applications, we run the standard Unix/Linux apps like Sendmail, Apache, BIND, DHCP, iptables, etc.; as well as Oracle 9i and Arkeia."
"We've been running XFS across the board for about 3 months now without a hitch (so far)."
"Size-wise, our biggest server is about 40 GB, but that will be increasing substantially in the near future."
"Our production servers are collocated so a journaled FS was a must. Reboots are quick and no human interaction is required like with a bad fsck on ext2. Additionally, our database servers gain additional integrity and robustness."
"We originally chose XFS over ReiserFS and ext3 because of it's age (it's been in production on SGI boxes for probably longer than all the other journalling FS's combined) and it's speed appeared comparable as well."
"Online Express Parcels Ltd. area new overnight parcel company based in the UK aimed at the quality end of the market."
"All our internal servers run Red Hat 7.1 with XFS on all filesystems."
"Using Red Hat and SGI's XFS allows us to purchase high quality but relatively inexpensive hardware and still get enterprise level performance and reliability."
"I've got XFS on a 'production' file server. The machine could have up to 500 people logged in, but typically less than 200. Most are Mac users, connected via NetAtalk for 'personal files', although there are shared areas for admin units. Probably about 30-40 windows users. (Samba) It's the file server for an Academic faculty at a University."
"Hardware RAID, via Mylex dual channel controller with 4 drives, Intel Tupelo MB, Intel 'SC5000' server chassis with redundant power and hot-swap scsi bays. The system boots off a non RAID single 9gb UW-scsi drive."
"Only system 'crash' was caused by some one accidently unplugging it, just before we put it into production. In day to day use it has run well."
All testimonials on this page represent the views of the submitters, and references to other products and companies should not be construed as an endorsement by either the organizations profiled or by SGI. All trademarks are the property of their respective owners. <& xfsTemplate,bottom=>1 &>