
<& xfsTemplate,top=>1,side=>1 &>

<!-- Start Project Content -->

<H1>Who's using XFS?</H1>

<A NAME="distros">
<H2>Linux Distributions shipping XFS</H2>
<P>
The following Linux distributions ship with XFS:
</P>
<table border=0 cellspacing=2 width="60%">
<tr>
<td valign=top bgcolor="#88ee88">
<b>Distribution:</b></td>
<td valign=top bgcolor="#99cccc">
<b>Support since:</b></td>
</tr>

<tr>
<td valign=top bgcolor="#88ee88">
<A HREF="http://www.mandrakesoft.com">Mandrake Linux</A></td>
<td valign=top bgcolor="#99cccc">
Version 8.1</td>
</tr>

<tr>
<td valign=top bgcolor="#88ee88">
<A HREF="http://www.suse.com">SuSE Linux</A></td>
<td valign=top bgcolor="#99cccc">
Version 8.0</td>
</tr>

<tr>
<td valign=top bgcolor="#88ee88">
<A HREF="http://www.gentoo.org">Gentoo Linux</A></td>
<td valign=top bgcolor="#99cccc">
Version 1.0</td>
</tr>

<tr>
<td valign=top bgcolor="#88ee88">
<A HREF="http://www.slackware.org">Slackware Linux</A></td>
<td valign=top bgcolor="#99cccc">
Version 8.1</td>
</tr>

<tr>
<td valign=top bgcolor="#88ee88">
<A HREF="http://www.jblinux.net">JB Linux</A></td>
<td valign=top bgcolor="#99cccc">
Version 2.0</td>
</tr>
</table>


<H2>Major Installations & Products using XFS</H2>
<P>
The following are submissions from system administrators who are
using XFS in a production environment; click an entry to jump to more
information.  If you have a notable XFS installation you'd like to have
added to the list, please send email to sandeen at sgi.com.
</P>

<H3>Educational & Research Institutions</H3>
<UL>
	<LI><a href="#Center for Cytometry and Molecular Imaging at the Salk Institute">Center for Cytometry and Molecular Imaging at the Salk Institute</A>
	<LI><a href="#The D0 Experiment at Fermilab">The D0 Experiment at Fermilab</A>
	<LI><a href="#The Sloan Digital Sky Survey">The Sloan Digital Sky Survey</A>
	<LI><a href="#Monmouth University">Monmouth University</A>
	<LI><a href="#The University of Wisconsin Astronomy Department">The University of Wisconsin Astronomy Department</A>
	<LI><a href="#Arts IT Unit, Sydney University">Arts IT Unit, Sydney University</A>
	<LI><a href="#Vanderbilt University Center for Structural Biology">Vanderbilt University Center for Structural Biology</A>
	<LI><a href="#CDF Experiment at Fermi National Lab">CDF Experiment at Fermi National Lab</A>
</UL>
 
<H3>Products shipping with XFS</H3>
<UL>
	<LI><a href="#Ciprico DiMeda NAS Solutions">Ciprico DiMeda NAS Solutions</A>
	<LI><a href="#The Quantum Guardian 14000">The Quantum Guardian&#153 14000</A>
	<LI><a href="#BigStorage K2~NAS">BigStorage K2~NAS</A>
	<LI><a href="#Echostar DishPVR 721">Echostar DishPVR 721</A>
	<LI><a href="#Sun Cobalt RaQ 550">Sun Cobalt RaQ&#153 550</A>
</UL>

<H3>Commercial Installations</H3>
<UL>
	<LI><a href="#Coltex Retail Group BV">Coltex Retail Group BV</A>
	<LI><a href="#DKP Effects">DKP Effects</A>
	<LI><a href="#Epigenomics">Epigenomics</A>
	<LI><a href="#Incyte Genomics">Incyte Genomics</A>
	<LI><a href="#The Austin Museum of Art">The Austin Museum of Art</A>
	<LI><a href="#tecmath AG">tecmath AG</A>
	<LI><a href="#Get2Chip, Inc.">Get2Chip, Inc.</A>
	<LI><a href="#Lando International Group Technologies">Lando International Group Technologies</A>
	<LI><a href="#The IQ Group">The IQ Group</A>
	<LI><a href="#Foote, Cone, & Belding">Foote, Cone, & Belding</A>
	<LI><a href="#Moving Picture Company">Moving Picture Company</A>
	<LI><a href="#Coremetrics, Inc.">Coremetrics, Inc.</A>
	<LI><a href="#Evolt.org">Evolt.org</A>
</UL>	


<A NAME="The Sloan Digital Sky Survey">
<H2><A HREF="http://www.sdss.org/">The Sloan Digital Sky Survey</A></H2>
<P>
"The Sloan Digital Sky Survey is an ambitious effort to map one-quarter of
the sky at optical and very-near infrared wavelengths and take spectra of 1
million extra-galactic objects. The estimated amount of data that will be
acquired over the 5-year lifespan of the project is 15TB; however, the total
amount of storage space required for object informational databases,
corrected frames, and reduced spectra will be several factors more than
this. The goal is to have all the data online and available to the
collaborators at all times. To accomplish this goal we are using commodity,
off the shelf (COTS) Intel servers with EIDE disks configured as RAID50 arrays 
using XFS. Currently, 14 machines are in production accounting for over 18TB.  
By the scheduled end of the survey in 2005, 50TB of XFS disks will be online 
serving SDSS data to collaborators and the public."
</P>
<P>
"For complete details and status of the project please see
<A HREF="http://www.sdss.org/">http://www.sdss.org</A>.
For details of the storage systems, see the 
<a href="http://home.fnal.gov/~yocum/storageServerTechnicalNote.html">SDSS
Storage Server Technical Note</a>."
</P>
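<P>
<I>As a rough illustration of the kind of setup SDSS describes (the device
and mount point names below are examples, not the actual SDSS configuration),
creating and mounting an XFS filesystem on a RAID block device looks like
this:</I>
</P>
<PRE>
# Make an XFS filesystem on an existing RAID device (illustrative name)
mkfs.xfs /dev/md0

# Mount it; after an unclean shutdown the journal is replayed
# automatically, so no lengthy fsck pass is needed
mount -t xfs /dev/md0 /data
</PRE>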


<A NAME="The D0 Experiment at Fermilab">
<H2><A HREF="http://www-d0.fnal.gov/">
	The D0 Experiment at Fermilab</A></H2>
<P>
"At the D0 experiment at the Fermi National Accelerator Laboratory we have
a ~150 node cluster of desktop machines all using the SGI-patched kernel.
Every large disk (>40Gb) or disk array in the cluster uses XFS including
4x640Gb disk servers and several 60-120Gb disks/arrays. Originally we
chose reiserfs as our journalling filesystem, however, this was a
disaster. We need to export these disks via NFS and this seemed
perpetually broken in 2.4 series kernels. We switched to XFS and have been
very happy. The only inconvenience is that it is not included in the
standard kernel. The SGI guys are very prompt in their support of new
kernels, but it is still an extra step which should not be necessary."
</P>
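<P>
<I>A minimal sketch of exporting an XFS-backed directory over NFS, as in the
setup described above (the path and host pattern are illustrative only):</I>
</P>
<PRE>
# /etc/exports -- export the XFS-backed data area to cluster nodes
/data   *.example.com(rw,sync)

# Ask the NFS server to re-read the exports table
exportfs -ra
</PRE>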

<A NAME="Ciprico DiMeda NAS Solutions">
<H2><A HREF="http://www.ciprico.com/pDiMeda.shtml">
Ciprico DiMeda NAS Solutions</A></H2>
<P>
"The Ciprico DiMeda line of Network Attached Storage solutions combine the 
ease of connectivity of NAS with the SAN like performance levels required 
for digital media applications.   The DiMeda 3600 provides high availability 
and high performance through  dual NAS servers and redundant, scalable 
Fibre Channel RAID storage.  The DiMeda 1700 provides high performance 
files services at a low price by using the latest Serial ATA RAID 
technology.  All DiMeda systems are based on Linux and use XFS as the 
filesystem. We tested a number of filesystem alternatives and XFS was 
chosen because it provided the highest performance in digital media 
applications and the journaling feature ensures rapid failover in our 
dual node fault tolerant configurations."
</P>

<A NAME="The Quantum Guardian 14000">
<H2><A HREF="http://www.quantum.com/Products/NAS+Servers/Guardian+14000/Default.htm">
	The Quantum Guardian&trade; 14000</A></H2>
<P>
"The Quantum Guardian&#153 14000, the latest Network Attached Storage (NAS) solution
from Quantum, delivers 1.4TB of enterprise-class storage for less than
$25,000.  The Guardian 14000 is a Linux-based device which utilizes XFS
to provide a highly reliable journaling filesystem with simultaneous
support for Windows, UNIX, Linux and Macintosh environments.  As
dedicated appliance optimized for fast, reliable file sharing, the
Guardian 14000 combines the simplicity of NAS with a robust feature set
designed for the most demanding enterprise environments.  Support for
tools such as Active Directory Service (ADS), UNIX Network Information
Service (NIS) and Simple Network Management Protocol (SNMP) provides
ease of management and seamless integration.  Hardware redundancy,
Snapshots and StorageCare&#153 on-site service ensure security for
business-critical data."
</P>

<A NAME="BigStorage K2~NAS">
<H2><A HREF="http://www.bigstorage.com/products_approach_overview.html">
	BigStorage K2~NAS</A></H2>
<P>
"At BigStorage we pride ourselves on tailoring our NAS systems to meet our
customer's needs, with the help of XFS we are able to provide them with the
most reliable Journaling Filesystem technology available.  Our open systems
approach, which allows for cross-platform integration, gives our customers
the flexibility to grow with their data requirements. In addition,
BigStorage offers a variety of other features including total hardware
redundancy, snapshotting, replication and backups directly from the unit.
All of our products include BigStorage’s 24/7 LiveResponse&#153 support.  With
LiveResponse&#153, we keep our team of experienced technical experts on call 24
hours a day, every day, to ensure that your storage investment remains
online, all the time."
</P>

<A NAME="Echostar DishPVR 721">
<H2><A HREF="http://www.echostar.com">
	Echostar DishPVR 721</A></H2>
<P>
"Echostar uses the XFS filesystem for its latest generation of satellite receivers, 
the DP721. Echostar chose XFS for its performance, stability and unique set of features."
</P>
<P>
"XFS allowed us to meet a demanding requirement of recording two mpeg2 streams to the 
internal hard drive while simultaneously viewing a third pre-recorded stream. 
In addition, XFS allowed us to withstand unexpected power loss without filesystem 
corruption or user interaction."
</P>
<P>
"We tested several other filesytems, but XFS emerged as the clear winner."
</P>

<A NAME="Sun Cobalt RaQ 550">
<H2><A HREF="http://www.sun.com/hardware/serverappliances/raq550/">
	Sun Cobalt RaQ&trade; 550</A></H2>
<P>
From the <A HREF="http://www.sun.com/hardware/serverappliances/raq550/features.html">features</A> page:
</P>
<P>
"XFS is a journaling file system capable of quick fail over recovery after 
unexpected interruptions.  XFS is an important feature for mission-critical applications as it ensures 
data integrity and dramatically reduces startup time by avoiding FSCK delay."
</P>


<A NAME="Center for Cytometry and Molecular Imaging at the Salk Institute">
<H2><A HREF="http://pingu.salk.edu/">
	Center for Cytometry and Molecular Imaging at the Salk Institute</A></H2>
<P>
"I run the Center for Cytometry and Molecular Imaging at the Salk
Institute in La Jolla, CA.  We're a core facility for the Institute,
offering flow cytometry, basic and deconvolution microscopy,
phosphorimaging (radioactivity imaging) and fluorescent imaging."
</P>
<P>
"I'm currently in the process of migrating our data server to
Linux/XFS. Our web server currently uses Linux/XFS. We have about
60 Gb on the data server which has a 100Gb SCSI RAID 5 array. This
is a bit restrictive for our microscopists so in order that they
can put more data online, I'm adding another machine, also running
Linux/XFS, with about 420 Gb of IDE-RAID5, based on Adaptec
controllers...."
</P>
<P>
"Servers are configured with quota and run Samba, NFS, and Netatalk
for connectivity to the mixed bag of computers we have around here.
I use the CVS XFS tree most of the time.  I have not seen any
problems in the several months I have been testing."
</P>
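<P>
<I>A minimal sketch of the quota configuration mentioned above (the device
and mount point are illustrative; on XFS, quota accounting is enabled at
mount time):</I>
</P>
<PRE>
# Mount an XFS filesystem with user quota accounting enabled
mount -t xfs -o uquota /dev/sdb1 /export/data

# Report per-user usage with the XFS quota tool
xfs_quota -x -c 'report -h' /export/data
</PRE>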


<A NAME="Coltex Retail Group BV">
<H2><A HREF="http://coltex.nl/">Coltex Retail Group BV</A></H2>
<P>
"Coltex Retail group BV in the Netherlands uses Red Hat Linux with XFS for
their main database server which collects the data from over 240 clothing
retail stores throughout the Netherlands. Coltex depends on the availability
of the server for over 100 employees in the main office for
retrieval of logistical and sales figures. The database size is roughly
10GB large containing both historical and current data."
</P>
<P>
"The entire production and logistical system depends on the availability of
the system and downtime would mean a significant financial penalty. The
speed and reliability of the XFS filesystem, which has a proven track record
and mature tools to go with it, is fundamental to the availability of the
system."
</P>
<P>
"XFS has saved us a lot of time during testing and implementation. A long
filesystem check is no longer needed when bad things happen, as they do.
The increased speed of our database system, which is based on Progress 9.1C,
is also a nice benefit of this filesystem."
</P>

<A NAME="DKP Effects">
<H2><A HREF="http://www.dkp.com/">DKP Effects</A></H2>
<P>
"We're a 3D computer graphics/post-production house.  We've
currently got four fileservers using XFS under Linux online -
three 350GB servers and one 800GB server.  The servers are under
fairly heavy load - network load to and from the dual NICs on
the box is basically maxed out 18 hours a day - and we do have
occasional lockups and drive failures.  Thanks to Linux SW RAID5
and XFS, though, we haven't had any data loss, or significant
down time."
</P>


<A NAME="Epigenomics">
<H2><A HREF="http://www.epigenomics.com/">Epigenomics</A></H2>
<P>
"We currently have several IDE-to-SCSI-RAID systems with XFS in
production. The largest has a capacity of 1.5TB, the other 2 have
430GB each."
</P>
<P>
"Data stored on these filesystems is on the one hand "normal" home
directories and corporate documents and on the other hand scientific
data for our laboratory and IT department."
</P>


<A NAME="Incyte Genomics">
<H2><A HREF="http://www.incyte.com/">Incyte Genomics</A></H2>
<P>
"I'm currently in the process of slowly converting 21 clusters
totaling 2300+ processors over to XFS."
</P>
<P>
"These machines are running a fairly stock RH7.1+XFS. The
application is our own custom scheduler for doing genomic
research. We have one of the world's largest sequencing
labs, which generates a tremendous amount of raw data. Vast
amounts of CPU cycles must be applied to it to turn it
into useful data we can then sell access to."
</P>
<P>
"Currently, a minority of these machines are running XFS,
but as I can get downtime on the clusters I am upgrading
them to 7.1+XFS. When I'm done, it'll be about 10TB of XFS
goodness... across 9G disks mostly."
</P>


<A NAME="Monmouth University">
<H2><A HREF="http://www.monmouth.edu/">Monmouth University</A></H2>
<P>"We've replaced our NetApp filer (80GB, $40,000).  NetApp ONTAP software
[runs on NetApp filers] is basically an NFS and CIFS server with their
own proprietary filesystem.  We were quickly running out of space, and
our annual budget was almost depleted.  What were we to do?"
</P>
<P>
"With an off-the-shelf Dell 4400 series server and 300GB of disks ($8,000
total).  We were able to run Linux and Samba to emulate a NetApp filer."
</P>
<P>
"XFS allowed us to manage 300GB of data with absolutely no downtime (now
going on 79 days) since implementation.  Gone are the days of fearing
the fsck of 300GB."
</P>
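<P>
<I>For the curious, a minimal smb.conf share of the sort such a Linux/Samba
file server would publish (the share name and path are illustrative):</I>
</P>
<PRE>
# Fragment of /etc/samba/smb.conf: share an XFS-backed directory
[data]
   path = /export/data
   read only = no
   browseable = yes
</PRE>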


<A NAME="The University of Wisconsin Astronomy Department">
<H2><A HREF="http://www.astro.wisc.edu">
	The University of Wisconsin Astronomy Department</A></H2>
<P>
"At the University of Wisconsin Astronomy Department
we have been using Linux XFS since the first release.  We currently have 31 Linux
boxes running XFS on all filesystems with about 2.6 TB of disk space on these
machines.  We use XFS primarily on our data reduction systems,  but we also
use it on our web server and on one of the remote observing machines at the
WIYN 3.5m Telescope at Kitt Peak  
(<A HREF="http://www.noao.edu/wiyn/wiyn.html">http://www.noao.edu/wiyn/wiyn.html</A>)."
</P>
<P>
"We will likely be using Linux XFS at least in part on the GLIMPSE program
(<A HREF="http://www.astro.wisc.edu/sirtf/">http://www.astro.wisc.edu/sirtf/</A>)
  which will likely require several TB of disk space to process the data."
</P>


<A NAME="The Austin Museum of Art">
<H2><A HREF="http://www.amoa.org/">The Austin Museum of Art</A></H2>
<P>
"The Austin Museum of Art has two file servers running RedHat 7.2_XFS
upgraded from RedHat 7.1_XFS.  Our webserver runs Domino on top of
RedHat 7.3_XFS and we're getting about 70% better performance than the
Domino server running on Windows 2000 Server.  We're moving our
workstations away from Windows and Microsoft Office to an LTSP server
running on RedHat 7.3_XFS."
</P>
<P>
"We've become solely dependent on XFS for all of our data systems."
</P>


<A NAME="tecmath AG">
<H2><A HREF="http://www.tecmath.com/">tecmath AG</A></H2>
<P>
"We use a production server with a 270 GB RAID 5 (hardware) disk array.
It is based on a Suse 7.2 distribution, but with a standard 2.4.12
kernel with XFS and LVM patches. The server provides NFS to 8 Unix clients
as well as Samba to about 80 PCs. The machine also runs Bind 9, Apache,
Exim, DHCP, POP3, MySQL. I have tried out different configurations
with ReiserFS, but I didn't manage to find a stable configuration with
respect to NFS. Since I converted all disks to XFS some 3 months ago,
we have not had any filesystem-related problems."
</P>


<A NAME="The IQ Group">
<H2><A HREF="http://www.theiqgroup.com/">The IQ Group</A></H2>
<P>
"Here at the IQ Group, Inc. we use XFS for all our production and
development servers."
</P>
<P>
"Our OS of choice is Slackware Linux 8.0.  Our hardware of choice is 
Dell and VALinux servers."
</P>
<P>
"As for applications, we run the standard Unix/Linux apps like Sendmail,
Apache, BIND, DHCP, iptables, etc.; as well as Oracle 9i and Arkeia."
</P>
<P>
"We've been running XFS across the board for about 3 months now without a
hitch (so far)."
</P>
<P>
"Size-wise, our biggest server is about 40 GB, but that will be
increasing substantially in the near future."
</P>
<P>
"Our production servers are collocated so a journaled FS was a must.
Reboots are quick and no human interaction is required like with a bad
fsck on ext2.  Additionally, our database servers gain additional
integrity and robustness."
</P>
<P>
"We originally chose XFS over ReiserFS and ext3 because of it's age (it's
been in production on SGI boxes for probably longer than all the other
journalling FS's combined) and it's speed appeared comparable as well."
</P>

<A NAME="Arts IT Unit, Sydney University">
<H2><A HREF="http://www.artsit.usyd.edu.au">
	Arts IT Unit, Sydney University</A></H2>
<P>
"I've got XFS on a 'production' file server. The machine could have up to
500 people logged in, but typically less than 200. Most are Mac users,
connected via NetAtalk for 'personal files', although there are shared
areas for admin units, plus about 30-40 Windows users (via Samba).
It's the file server for an academic faculty at a university."
</P>
<P>
"Hardware RAID, via Mylex dual channel controller with 4 drives, Intel
Tupelo MB, Intel 'SC5000' server chassis with redundant power and
hot-swap SCSI bays. The system boots off a single non-RAID 9GB UW-SCSI
drive."
</P>
<P>
"Only system 'crash' was caused by some one accidently unplugging it,
just before we put it into production. It was back in full operation
within 5 minutes.  Without journaling, the fsck would have taken well
over an hour.  In day to day use it has run well."
</P>


<A NAME="Vanderbilt University Center for Structural Biology">
<H2><A HREF="http://structbio.vanderbilt.edu/comp/">
	Vanderbilt University Center for Structural Biology</A></H2>
<P>
"I run a high-performance computing center for Structural Biology research 
at Vanderbilt University.  We use XFS extensively, and have been since the 
late prerelease versions.  I've had nothing but good experiences with it."
</P>
<P>
"We began using XFS in our search for a good solution for our RAID
fileservers.  We had such good experiences with it on these systems that
we've begun putting it on the root/usr/var partitions of every Linux
system we run here. I even have it on my laptop these days. XFS in
combination with the 2.4 NFS3 implementation performs very well for us,
and we have great uptimes on these systems (Our 750GB ArenaII setup is at
143 days right now)."
</P>
<P>
"All told, we've got about 1.2TB of XFS filesystems spinning right now.  
It's spread out across maybe a dozen or so filesystems and will 
continue to increase as we are growing fast and that's all we use now.  
Next up is putting it on our 17-node Linux cluster, which will bring that 
up to 1.5TB spread across 30 filesystems."
</P>
<P>
"I, for one, would LOVE to see XFS make it into the kernel tree.  From my
perspective, it's one of the best things to happen to Linux in the 7
years I've been using/administering it."
</P>


<A NAME="CDF Experiment at Fermi National Lab">
<H2><A HREF="http://www-cdf.fnal.gov/">
	CDF Experiment at Fermi National Lab</A></H2>
<P>
CDF, an elementary particle physics experiment at Fermi National Lab,
is using XFS for all of our cache disks.
</P>
<P>
The usage model is that we have a PB tape archive (2 STK silos)
as permanent storage. In front of this archive we are deploying
a roughly 100TB disk cache system. The cache is made up of 50 2TB
file servers based on cheap commodity hardware (3ware-based hardware RAID
using IDE drives). The data is then processed by a cluster of 300 dual-CPU
Linux PCs. The cache software is dCache, a DESY/FNAL product.
</P>
<P>
The whole system is used by more than 300 active users from all over the
world for batch processing for their physics data analysis.
</P>

<A NAME="Get2Chip, Inc.">
<H2><A HREF="http://www.get2chip.com">
	Get2Chip, Inc.</A></H2>
<P>
"We are using XFS on 3 production file servers with approximately 1.5T of
data.  Quite impressive especially when we had a power outage and all
three servers shutdown. All servers came back up in minutes with no
problems!  We are looking at creating two more servers that would manage
2+ TB of data store."
</P>


<A NAME="Lando International Group Technologies">
<H2><A HREF="http://www.lando.co.za">
	Lando International Group Technologies</A></H2>
<P>
Lando International Group Technologies is the home of:
<UL>
<LI><a href="www.lando.co.za">Lanndo Technologies Africa (Pty) Ltd</a> - Internet Service Provider 
<LI><a href="www.lbsd.net">Linux Based Systems Design</a> (Article 21). Not-For-Profit
    company established to provide free Linux distributions and programs.
<LI>Cell Park South Africa (Pty) Ltd.  RSA Pat Appln 2001/10406. Collecting 
    parking fees by means of cell phone SMS or voice.  
<LI>Read Plus Education (Pty) Ltd.  Software based reading skills training 
    and testing for ages 4 to 100.
<LI>Mobivan. Mobile office including Internet access, fax, copying, 
    printing, telephone, collection and delivery services, legal
    services, pre-paid phone and electricity services, bill payment email,
    secretarial services, training facilities and management services.
<LI>Lando International Marketing Agency. Direct marketing services, design 
    and supply of promotional material, consulting, sourcing of capital and 
    other funding.
<LI>Illico.  Software development and systems analyses on most platforms.  
</UL>
<P>
"Throughout these companies, we use the XFS filesystem with 
<a href="http://idms.lbsd.net">IDMS Linux</a> on high-end Intel servers, with an average of 
100 GB storage each.  XFS stores our customer and user data, 
including credit card details, mail, routing tables, etc. We have not had one problem
since the release of the first XFS patch."
</P>


<A NAME="Foote, Cone, & Belding">
<H2><A HREF="http://www.fcb-wilkens.com">
	Foote, Cone, & Belding</A></H2>
<P>
"We are an advertisment company in germany, and the use of the xfs file system
is a story of success for us. In our hamburg office, we have two file servers
having a 420 Gig RAID in xfs format serving (almost) all our data to about 180
Macs and about 30  PCs using  samba and netatalk. Some of the data is used in
our offices in Frankfurt and Berlin, and in fact the Berlin office is just
getting it's own 250 Gig fileserver (using xfs) right now."
</P>
<P>
"The general success with xfs has led us to switch over all our linux servers
to run on xfs as well (with the exception of two systems that are tied to
tight specifications configuration wise). XFS, even the old 1.0 version, has
happily taken on various abuse - broken scsi controllers, broken RAID systems."
</P>


<A NAME="Moving Picture Company">
<H2><A HREF="http://www.moving-picture.co.uk/">
	Moving Picture Company</A></H2>
<P>
"We here at MPC use XFS/RedHat 7.2 on all of our graphics-workstations 
and file-servers. More info can be found in an 
<A HREF="http://www.linuxuser.co.uk/articles/issue20/lu20-Linux_at_work-In_the_picture.pdf">
article</A> LinuxUser magazine did on us recently."
</P>


<A NAME="Coremetrics, Inc.">
<H2><A HREF="http://www.coremetrics.com/">
	Coremetrics, Inc.</A></H2>
<P>
"We are currently using XFS for 25+ production web-servers, ~900GB Oracle
db servers, with potentially 15+ more servers by mid 2003, with ~900GB+
databases. All XFS installed."
</P>
<P>
"Also, our dev environment, except for the Sun boxes which all are being
migrated to X86 in the aforementioned server additions, plus the dev Sun
boxes as well, are all x86 dual proc servers running Oracle, application
servers, or web services as needed. All servers run XFS from images
we've got on our SystemImager servers."
</P>
<P>
"All production back-end servers are connected via FC1 or FC2 to a SAN
containing ~13TB of raw storage, which, will soon be converted from VxFS
to XFS with the migration of Oracle to our x86 platforms."
</P>


<A NAME="Evolt.org">
<H2><a href="http://evolt.org">Evolt.org</a></H2>
<P>
"evolt.org, a world community for web
developers promoting the mutual free exchange of ideas, skills and
experiences, has had a great deal of success using XFS. Our primary
webserver which serves 100K hosts/month, primary Oracle database with
~25Gb of data, and free member hosting for 1000 users haven't had a minute
of downtime since XFS was installed. Performance has been
spectacular and maintenance a breeze."
</P>

<FONT SIZE=-1>
<I>All testimonials on this page represent the views of the submitters,
and references to other products and companies should not be construed
as an endorsement by either the organizations profiled or by SGI.
All trademarks are the property of their respective owners.</I>
</FONT>

<& xfsTemplate,bottom=>1 &>