<& xfsTemplate,top=>1,side=>1 &>
<!-- Start Project Content -->
<H1>Who's using XFS?</H1>
<P>
The following are submissions from system administrators who are
using XFS in a production environment. If you have a notable
XFS installation you'd like to have added to the list, please
send email to sandeen at sgi.com.
</P>
<H2><A HREF="http://www.sdss.org/">The Sloan Digital Sky Survey</A></H2>
<P>
"The Sloan Digital Sky Survey is an ambitious effort to map one-quarter of
the sky at optical and very-near infrared wavelengths and take spectra of 1
million extra-galactic objects. The estimated amount of data that will be
acquired over the 5 year lifespan of the project is 15TB, however, the total
amount of storage space required for object informational databases,
corrected frames, and reduced spectra will be several factors more than
this. The goal is to have all the data online and available to the
collaborators at all times. To accomplish this goal we are using commodity,
off the shelf (COTS) Intel servers with EIDE disks configured as RAID50
arrays using XFS. Each machine has 16 81.9GB disks resulting in 1.12TB of
usable space. Currently, 10 machines are in production and plans for 12TB
more space is in the works."
</P>
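<P>
For readers wondering what such a setup looks like in practice, the
commands below are a minimal sketch of assembling a RAID50 array and
putting XFS on it with current tools (mdadm and xfsprogs). The device
names, disk counts, and mount point are illustrative assumptions, not
taken from the SDSS configuration.
</P>
<PRE>
# Sketch only: build two RAID5 legs, then stripe them together (RAID0)
# to form a RAID50 array. Device names and disk counts are illustrative.
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]
mdadm --create /dev/md1 --level=5 --raid-devices=8 /dev/sd[j-q]
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

# Create and mount the XFS filesystem on the striped array
mkfs.xfs /dev/md2
mkdir -p /export/data
mount -t xfs /dev/md2 /export/data
</PRE>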
<P>
"For complete details and status of the project please see
<A HREF="http://www.sdss.org/">http://www.sdss.org</A>.
For details of the storage systems, see the
<a href="http://home.fnal.gov/~yocum/storageServerTechnicalNote.html">SDSS
Storage Server Technical Note</a>."
</P>
<H2><A HREF="http://www-d0.fnal.gov/">
The D0 Experiment at Fermilab</A></H2>
<P>
"At the D0 experiment at the Fermi National Accelerator Laboratory we have
a ~150 node cluster of desktop machines all using the SGI-patched kernel.
Every large disk (>40Gb) or disk array in the cluster uses XFS including
4x640Gb disk servers and several 60-120Gb disks/arrays. Originally we
chose reiserfs as our journalling filesystem, however, this was a
disaster. We need to export these disks via NFS and this seemed
perpetually broken in 2.4 series kernels. We switched to XFS and have been
very happy. The only inconvenience is that it is not included in the
standard kernel. The SGI guys are very prompt in their support of new
kernels, but it is still an extra step which should not be necessary."
</P>
<H2><A HREF="http://coltex.nl/">Coltex Retail Group BV</A></H2>
<P>
"Coltex Retail group BV in the Netherlands uses Red Hat Linux with XFS for
their main database server which collects the data from over 240 clothing
retail stores throughout the Netherlands. Coltex depends on the availability
of the server for over 100 hundred employees in the main office for
retrieval of logistical and sales figures. The database size is roughly
10GB large containing both historical and current data."
</P>
<P>
"The entire production and logistical system depends on the availability of
the system and downtime would mean a significant financial penalty. The
speed and reliability of the XFS filesystem which has a proven track record
and mature tools to go with it is fundamental to the availability of the
system."
</P>
<P>
"XFS has saved us a lot of time during testing and implementation. A long
filesystems check is no longer needed when bad things happen when they do.
The increased speed of our database system which is based on Progress 9.1C
is also a nice benefit to this filesystem."
</P>
<H2><A HREF="http://www.dkp.com/">DKP Effects</A></H2>
<P>
"We're a 3D computer graphics/post-production house. We've
currently got four fileservers using XFS under Linux online -
three 350GB servers and one 800GB server. The servers are under
fairly heavy load - network load to and from the dual NICs on
the box is basically maxed out 18 hours a day - and we do have
occasional lockups and drive failures. Thanks to Linux SW RAID5
and XFS, though, we haven't had any data loss or significant
downtime."
</P>
<H2><A HREF="http://www.epigenomics.com/">Epigenomics</A></H2>
<P>
"We have one 430GB RAID system with XFS in production storing
corporate documents and another will go into production soon
storing more scientific data."
</P>
<H2><A HREF="http://www.incyte.com/">Incyte Genomics</A></H2>
<P>
"I'm currently in the process of slowly converting 21 clusters
totaling 2300+ processors over to XFS."
</P>
<P>
"These machines are running a fairly stock RH7.1+XFS. The
application is our own custom scheduler for doing genomic
research. We have one of the world's largest sequencing
labs, which generates a tremendous amount of raw data. Vast
amounts of CPU cycles must be applied to it to turn it
into useful data we can then sell access to."
</P>
<P>
"Currently, a minority of these machines are running XFS,
but as I can get downtime on the clusters I am upgrading
them to 7.1+XFS. When I'm done, it'll be about 10TB of XFS
goodness... across 9G disks mostly."
</P>
<H2><A HREF="http://www.monmouth.edu/">Monmouth University</A></H2>
<P>"We've replaced our NetApp filer (80GB, $40,000). NetApp ONTAP software
[runs on NetApp filers] is basically an NFS and CIFS server with their
own proprietary filesystem. We were quickly running out of space, and
our annual budget was almost depleted. What were we to do?"
</P>
<P>
"With an off-the-shelf Dell 4400 series server and 300GB of disks ($8,000
total). We were able to run Linux and Samba to emulate a NetApp filer."
</P>
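<P>
A minimal smb.conf along these lines is enough to serve an XFS volume to
Windows clients in this role; the workgroup, share name, and path below
are illustrative assumptions, not Monmouth's actual configuration.
</P>
<PRE>
# Sketch of a minimal smb.conf for serving an XFS volume to Windows clients.
# Workgroup, share name, and path are illustrative only.
[global]
   workgroup = CAMPUS
   server string = Linux/XFS file server
   security = user

# Directory on the XFS filesystem to share out
[data]
   path = /export/data
   read only = no
   browseable = yes
</PRE>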
<P>
"XFS allowed us to manage 300GB of data with absolutely no downtime (now
going on 79 days) since implementation. Gone are the days of fearing
the fsck of 300GB."
</P>
<H2><A HREF="http://pingu.salk.edu/">
Center for Cytometry and Molecular Imaging at the Salk Institute</A></H2>
<P>
"I run the Center for Cytometry and Molecular Imaging at the Salk
Institute in La Jolla, CA. We're a core facility for the Institute,
offering flow cytometry, basic and deconvolution microscopy,
phosphorimaging (radioactivity imaging) and fluorescent imaging."
</P>
<P>
"I'm currently in the process of migrating our data server to
Linux/XFS. Our web server currently uses Linux/XFS. We have about
60GB on the data server, which has a 100GB SCSI RAID 5 array. This
is a bit restrictive for our microscopists, so in order that they
can put more data online, I'm adding another machine, also running
Linux/XFS, with about 420GB of IDE-RAID5, based on Adaptec
controllers...."
</P>
<P>
"Servers are configured with quota and run Samba, NFS, and Netatalk
for connectivity to the mixed bag of computers we have around here.
I use the CVS XFS tree most of the time. I have not seen any
problems in the several months I have been testing."
</P>
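<P>
XFS quotas of this kind are normally enabled with mount options; the fstab
entry and xfs_quota commands below are an illustrative sketch, with a
hypothetical device, mount point, and user name rather than the Salk
configuration.
</P>
<PRE>
# Illustrative /etc/fstab entry: enable user and group quotas on an XFS
# filesystem at mount time. Device and mount point are hypothetical.
/dev/sdb1  /export/data  xfs  defaults,usrquota,grpquota  0 0

# Report current usage and limits once the filesystem is mounted
xfs_quota -x -c 'report -h' /export/data

# Set soft/hard block limits for a (hypothetical) user
xfs_quota -x -c 'limit bsoft=5g bhard=6g someuser' /export/data
</PRE>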
<H2><A HREF="http://www.amoa.org/">The Austin Museum of Art</A></H2>
<P>
"The Austin Museum of Art has two file servers (only 40 and 50 gig of
software RAID respectively) running on RedHat 7.1 XFS. We have another
file server that is using the ext2 acl patches for backwards
compatibility which is 100 gig. About 50 users use the two XFS boxes
pretty much all day long. Had one power failure, boxes were up for 10
minutes before the other box was done fscking."
</P>
<H2><A HREF="http://www.tecmath.com/">tecmath AG</A></H2>
<P>
"We use a production server with a 270 GB RAID 5 (hardware) disk array.
It is based on a Suse 7.2 distribution, but with a standard 2.4.12
kernel with XFS and LVM patches. The server provides NFS to 8 Unix clients
as well as Samba to about 80 PCs. The machine also runs Bind 9, Apache,
Exim, DHCP, POP3, MySQL. I have tried out different configurations
with ReiserFS, but I didn't manage to find a stable configuration with
respect to NFS. Since I converted all disks to XFS some 3 months ago,
we never had any filesystem-related problems."
</P>
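<P>
Exporting an XFS volume over NFS in this way needs nothing more than
normal /etc/exports entries; the lines below are an illustrative sketch
with hypothetical paths and client networks, not tecmath's actual setup.
</P>
<PRE>
# Illustrative /etc/exports entries for serving XFS volumes to Unix clients.
# Paths and client networks are hypothetical.
/export/projects   192.168.1.0/24(rw,sync,no_subtree_check)
/export/home       192.168.1.0/24(rw,sync,root_squash)

# Re-read the exports table after editing
exportfs -ra
</PRE>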
<H2><A HREF="http://www.theiqgroup.com/">The IQ group</A></H2>
<P>
"Here at the IQ Group, Inc. we use XFS for all our production and
development servers."
</P>
<P>
"Our OS of choice is Slackware Linux 8.0. Our hardware of choice is
Dell and VALinux servers."
</P>
<P>
"As for applications, we run the standard Unix/Linux apps like Sendmail,
Apache, BIND, DHCP, iptables, etc.; as well as Oracle 9i and Arkeia."
</P>
<P>
"We've been running XFS across the board for about 3 months now without a
hitch (so far)."
</P>
<P>
"Size-wise, our biggest server is about 40 GB, but that will be
increasing substantially in the near future."
</P>
<P>
"Our production servers are collocated so a journaled FS was a must.
Reboots are quick and no human interaction is required like with a bad
fsck on ext2. Additionally, our database servers gain additional
integrity and robustness."
</P>
<P>
"We originally chose XFS over ReiserFS and ext3 because of it's age (it's
been in production on SGI boxes for probably longer than all the other
journalling FS's combined) and it's speed appeared comparable as well."
</P>
<H2><A HREF="http://www.onlineexpressparcels.com">
Online Express Parcels Ltd.</A></H2>
<P>
"Online Express Parcels Ltd. is a new overnight parcel company based
in the UK aimed at the quality end of the market."
</P>
<P>
"All our internal servers run Red Hat 7.1 with XFS on all filesystems."
</P>
<P>
"Using Red Hat and SGI's XFS allows us to purchase high quality but
relatively inexpensive hardware and still get enterprise level
performance and reliability."
</P>
<H2><A HREF="http://www.artsit.usyd.edu.au">
Arts IT Unit, Sydney University</A></H2>
<P>
"I've got XFS on a 'production' file server. The machine could have up to
500 people logged in, but typically less than 200. Most are Mac users,
connected via NetAtalk for 'personal files', although there are shared
areas for admin units. Probably about 30-40 windows users. (Samba)
It's the file server for an Academic faculty at a University."
</P>
<P>
"Hardware RAID, via Mylex dual channel controller with 4 drives, Intel
Tupelo MB, Intel 'SC5000' server chassis with redundant power and
hot-swap scsi bays. The system boots off a non RAID single 9gb UW-scsi
drive."
</P>
<P>
"Only system 'crash' was caused by some one accidently unplugging it,
just before we put it into production. It was back in full operation
within 5 minutes. Without journaling, the fsck would have taken well
over an hour. In day to day use it has run well."
</P>
<FONT SIZE=-1>
<I>All testimonials on this page represent the views of the submitters,
and references to other products and companies should not be construed
as an endorsement by either the organizations profiled or by SGI.
All trademarks are the property of their respective owners.</I>
</FONT>
<& xfsTemplate,bottom=>1 &>