
<& xfsTemplate,top=>1,side=>1 &>

<FONT FACE="ARIAL NARROW, HELVETICA" SIZE="5"><B>Linux XFS FAQ</B></FONT>
<FONT FACE="ARIAL NARROW, HELVETICA">
<P>
Quick links:
</P>
<ul>
   <li>Where is:<br>
    <a href="#wherefaq">Where Can I find this FAQ?</A><br>
    <a href="#aboutxfs">Where can I find information about XFS for linux?</A><br>
    <a href="#acldocs">Where can I find documentation about acls?</A><br>
    <a href="#xfsdocs">Where can I find documentation about XFS?</A><br>
    <a href="#nativesupport">Where can I find Linux distributions that natively support XFS?</A><br>
    <a href="#debianbf">Where can I find Debian boot/install disks?</A><br>
    <a href="#slackware">Where can I find Slackware boot/install disks?</A><br>
    <a href="#redhat">Where can I find Red Hat boot/install disks?</A><br>
    <a href="#stablexfspatches">Where can I find stable XFS patches?</A><br>
    <a href="#snapxfspatches">Where can I find snapshot XFS patches for kernel-x.y.z?</A><br>
    <a href="#xfsbootdisks">Where can I find pre made XFS boot disks?</A><br>
    <a href="#searcharchive">Where can I find a searchable mail archive?</A><br>
    <a href="#cbindings">Where can I find C++ bindings for Extended Attributes</A><br>
    <a href="#diskimages">Where can I find a tool to create disk images from a XFS filesystem?</A><br>
   </li>
   <li>What is:<br>
    <a href="#whatisxfs">What is XFS?</A><br>
    <a href="#needxfs">What do I need to build an XFS ready kernel?</A><br>
    <a href="#whatcmds">What are all those other things in the cmd/xfs directory?</A><br>
   </li>
   <li>Building issues:<br>
    <a href="#compilersissues">Are there any known issues about gcc for compiling the XFS kernel tree?</A><br>
    <a href="#aic7xxxcrash">The Adaptec aic7xxx driver crashes when booting this kernel.</A><br>
    <a href="#aic7xxxcompile">The Adaptec aic7xxx does not compile.</A><br>
    <a href="#kernelcompile">The kernel does not compile.</A><br>
    <a href="#nvidia">Do the nvidia drivers work with the XFS tree?</A><br>
    <a href="#jfsplusxfs">Can I use JFS and XFS together?</A><br>
   </li>
   <li>Functionality issues:<br>
    <a href="#platformsrun">Does it run on platforms other than i386?</A><br>
    <a href="#quotaswork">Do quotas work on XFS?</A><br>
    <a href="#dumprestore">Is there any dump/restore for XFS?</A><br>
    <a href="#lilowork">Does LILO work with XFS?</A><br>
    <a href="#grubwork">Does GRUB work with XFS?</A><br>
    <a href="#usexfslvm">Can I run XFS on top of LVM?</A><br>
    <a href="#usexfsloop">Can I use XFS on loopback block devices?</A><br>
    <a href="#usexfsroot">Can XFS be used for a root filesystem?</A><br>
    <a href="#xfsworkraid">Does XFS run on hardware RAID?</A><br>
    <a href="#xfsworknfs">Can I run NFS on top of an XFS partition?</A><br>
    <a href="#doesxfsworkmd">Does XFS run with the linux software RAID md driver?</A><br>
    <a href="#largefilesupport">Does XFS support large files (bigger then 2GB)?</A><br>
    <a href="#useirixdisks">Will I be able to use my old IRIX XFS disks on linux?</A><br>
    <a href="#rsyncuse">Can I use rsync to download XFS?</A><br>
    <a href="#blocksize">What does the warning mean that mkfs.xfs gives about the blocksize?</A><br>
    <a href="#fullfs">Does the filesystem slow down when it is nearly full?</A><br>
    <a href="#undelete">Does the filesystem have a undelete function?</A><br>
   </li>
   <li>General questions:<br>
    <a href="#usexfs">How do I use XFS?</A><br>
    <a href="#stablexfs">How stable is XFS?</A><br>
    <a href="#whatpartitionxfs">What partition type should I use for XFS?</A><br> 
    <a href="#resizexfspartition">Is there a way to make a XFS filesystem larger or smaller?</A><br>
    <a href="#linuskernel">When will XFS be included in the mainstream linus kernel?</A><br>
    <a href="#latest">What version is your CVS tree currently based on?</A><br>
    <a href="#mountoptions">What mountoptions does XFS have?</A><br>
    <a href="#problemreport">What info should I include when reporting a problem?</A><br>
   </li>
   <li>Problems:<br>
    <a href="#xfsmountfail">Mounting the XFS filesystem does not work - what is wrong?</A><br>
    <a href="#hangprocess">Processes are hanging when acting on an XFS filesystem (for instance in sv_wait) - what is this?</A><br>
    <a href="#rh7syscall">Why does my Red Hat 7.0 system not boot with the XFS beta kernel?</A><br>
    <a href="#xfschecks">My filesystem is ok - but xfs_check or xfs_repair shows errors/won't run - whats wrong here?</A><br>
    <a href="#xfsdbench">I am getting very bad dbench (etc.) results with XFS - whats wrong here?</A><br>
    <a href="#longumount">Mounting or umounting an XFS root filesystem takes a very long time - why?</A><br>
    <a href="#lvmgrowfs">When growing a lvm volume with xfs_growfs multiple times it fails, why?</A><br>
    <a href="#nfspermissions">NFS permissions seem to be reset for no apperent reason?</A><br>
    <a href="#xfsprogskernel">Is there any relation between the xfs utilities and the kernel version?</A><br>
    <a href="#xfsfitfloppy">Why doesn't my XFS kernel fit on a bootfloppy that I make with mkbootdisk?</A><br>
    <a href="#backingupxfs">How can I backup an XFS filesystem and acls?</A><br>
    <a href="#vmwarelock">VMware says that it does not know if the filesystem supports locking?</A><br>
    <a href="#rpmdb">Rebuilding a RPM database makes it very large, why?</A><br>
    <a href="#error990">I see applications returning error 990, what's wrong?</A><br>
    <a href="#forceshutdown">I see a xfs_force_shutdown message in the dmesg or system log, what is going wrong?</A><br>
    <a href="#nulls">Why do I see binary NULLS in my files after recovery when I unplugged the power?</A><br>
   </li>
</ul>
    <A name="wherefaq">
<h2>
Q: Where can I find this FAQ?
</h2>
<P>
Currently at:
</P>
<ul>
    <A HREF="http://oss.sgi.com/projects/xfs/faq.html">http://oss.sgi.com/projects/xfs/faq.html</A>
</ul>
<P>
If you have any comments or suggestions just let me know:
</P>
<ul>
    <A HREF="mailto:seth.mos@xs4all.nl">seth.mos@xs4all.nl</A>
</ul>
    <A name="whatisxfs">
<h2>
Q: What is XFS?
</h2>
<P>
XFS is a journalling filesystem developed by SGI and used in SGI's
IRIX operating system. It is now also available under the GPL for Linux.
It is extremely scalable, using B-trees extensively to support large
and/or sparse files and extremely large directories. The journalling
capability means no more waiting for fsck or worrying about metadata
corruption.
</P>
    <A name="aboutxfs">
<h2>
Q: Where can I find information about XFS for linux?
</h2>
<P>
Just point your web-browser to:
</P>
<ul>
    <A HREF="http://oss.sgi.com/projects/xfs/">http://oss.sgi.com/projects/xfs/</A>
</ul>
<P>
You could also join the <A HREF="mail.html">linux-xfs mailing list</A>,
and we also have an IRC channel, #xfs, on irc.openprojects.net.
</P>
    <A name="needxfs">
<h2>
Q: What do I need to build an XFS ready kernel?
</h2>
<P>
The best way to do this is to check out the SGI XFS kernel from
their CVS tree. How to do this is described at
</P>
<ul>
    <A HREF="http://oss.sgi.com/projects/xfs/cvs_download.html">http://oss.sgi.com/projects/xfs/cvs_download.html</A>
</ul>
<P>
After that you have two subtrees of importance: linux and cmd. The
first one - linux - is a normal Linux kernel source tree containing
the XFS code. It is updated to the latest available Linux kernel,
but may be a bit (if not much) behind the official release. Just build
your kernel the way you are used to, and don't forget to enable
XFS and pagebuf under filesystems.<br>
The other tree - cmd - contains all the tools you need,
most importantly <tt>mkfs.xfs</tt> and <tt>xfs_repair</tt>. The cmd
tree was restructured a bit at the beginning of 2001: you now have
to go into the various subdirectories (xfsprogs, attr, acl, xfsdump,
dmapi) and run make (perhaps followed by make install to install the
binaries), or the Makepkgs script described below. One thing which is
also important to note is that you need to have the e2fsprogs-devel
package installed in order to build the cmd tools, because they
require its UUID library. You may also build source and binary rpms
by running:
</P>
<pre>
    ./Makepkgs verbose
</pre>
<P>
in the cmd/xfs directory (or in cmd/xfsprogs, cmd/attr, ... after the
restructuring). The tools also have man pages which you may consult
for interesting options.
</P>
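<p>
For example, building and installing just the filesystem utilities from a
checked-out tree might look like this (a rough sketch; the exact directory
layout depends on when you checked out the tree):
</p>
<pre>
    cd cmd/xfsprogs
    make
    make install    # as root; installs mkfs.xfs, xfs_repair, ...
</pre>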
<P>
There is also another way to get an XFS-ready kernel - you may
get kernel patches relative to an official kernel from:
</P>
<ul>
    <A HREF="ftp://oss.sgi.com/projects/xfs/download/">ftp://oss.sgi.com/projects/xfs/download/</A>
</ul>
<P>
and apply them to the kernel sources the patch is for. This is a
good way for people who don't want to use CVS or do not
have the bandwidth to check out the whole kernel tree.
</P>
<P>
A third way to get an XFS-ready system and kernel is to use the
prepared rpms and Red Hat Linux installer ISO images from
SGI, which you can find in the download area.
</P>
    <A name="usexfs">
<h2>
Q: How do I use XFS?
</h2>
<P>
You will find a small HOWTO about the necessary steps in the download area.
</P>
<P>
Just reboot into the newly built kernel and create a filesystem on an
empty partition:
</P>
<pre>
    mkfs -t xfs /dev/foo
</pre>
<P>
where /dev/foo is the partition you want to use. You may have to use
the -f option of <tt>mkfs -t xfs</tt> (which calls the mkfs.xfs from
the cmd directory, which you must have installed) if this partition
already contains an old filesystem which you want to overwrite. Now
you can mount the filesystem using:
</P>
<pre>
    mount -t xfs /dev/foo /somewhere
</pre>
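<p>
If you want the filesystem mounted automatically at boot, a line like the
following in <tt>/etc/fstab</tt> should do it (device and mount point are
the placeholders from above):
</p>
<pre>
    /dev/foo   /somewhere   xfs   defaults   0   0
</pre>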
    <A name="stablexfs">
<h2>
Q: How stable is XFS?
</h2>
<P>
It is stable and being used in production on a large range of
hardware, from small systems to big multiprocessor systems with
gigabytes of RAM and terabytes of disk space.
</P>
    <A name="platformsrun">
<h2>
Q: Does it run on platforms other than i386?
</h2>
<P>
The current XFS tree seems to work just fine on ppc now (aside from some
trivial compile fixes). It also runs well and gets sporadic testing
on alpha, sparc64 and ia64. On those platforms it is not as well
tested as on i386, but so far no major problems are known there.
All in all it looks like XFS will soon run fine across a lot of
platforms (the platforms above cover 32/64-bit and little/big-endian
architectures). If you run it on a platform not mentioned here,
please let me know so that I can add it.
Also important to note: the XFS on-disk layout is inherently platform
independent, so it should be possible to move an XFS disk from one
Linux platform to another out of the box.
</P>
    <A name="quotaswork">
<h2>
Q: Do quotas work on XFS?
</h2>
<P>
User and group quotas are supported. 
</p>
<p>To use quotas with XFS, you need to enable Linux quota support and
XFS quota support when you configure your kernel. You also need to specify
quota support when you mount.<br>
The Linux quota tools now include XFS quota controls. You can get the quota
utilities at their SourceForge website <a href="http://sourceforge.net/projects/linuxquota/">
http://sourceforge.net/projects/linuxquota/</a>.
There are problems with the quota support in the 1.0 release; upgrade to a
later kernel version or XFS release to make it work.
</P>
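<p>
As a sketch, turning on user and group quotas at mount time looks something
like this (the device and mount point are placeholders; option names may
vary between XFS versions):
</p>
<pre>
    mount -t xfs -o quota,gquota /dev/foo /somewhere
</pre>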
    <A name="dumprestore">
<h2>
Q: Is there any dump/restore for XFS?
</h2>
<P>
xfsdump and xfsrestore are now in the CVS tree. The tape format is
the same as for xfsdump and xfsrestore on IRIX, and dump tapes should
be interchangeable between systems.
</p>
<p>
The tape format is <em>not the same</em> as the classic Unix dump, but should
work fine with tools like Amanda. Dumps produced with other standard
dump programs should be restorable onto an XFS filesystem
using the corresponding restore program.
</P>
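<p>
A minimal sketch of a dump and restore cycle (the tape device and mount
point are placeholders):
</p>
<pre>
    xfsdump -f /dev/tape /somewhere       # dump the filesystem mounted there
    xfsrestore -f /dev/tape /somewhere    # restore it again
</pre>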
    <A name="lilowork">
<h2>
Q: Does LILO work with XFS?
</h2>
<P>
This depends on where you install LILO.
For MBR installation: yes.
For root partitions: no, because the XFS superblock goes where LILO
would be installed. This is to maintain compatibility with the IRIX
on-disk format and will not be changed. Putting LILO on the swap
partition is reported to work but is not guaranteed.
</P>
    <A name="grubwork">
<h2>
Q: Does GRUB work with XFS?
</h2>
<P>
Yes, GRUB has native XFS filesystem support starting with version 0.91.
A GRUB rpm that supports XFS is in the download section for the 1.0.2 installer on the FTP sites.
</P>
    <A name="usexfslvm">
<h2>
Q: Can I run XFS on top of LVM?
</h2>
<P>
Yes, XFS should run fine on top of LVM. If you plan to do so, keep
in mind that the 1.0 and 1.0.1 XFS releases (and also the XFS 1.0 previews)
contain LVM 0.9beta6. This has recently changed to 1.0.1rc4 in CVS;
that code has some tweaks for XFS, and the 1.0.2 installer ships with
1.0.1rc4 as well.
Snapshotting should work; please report problems to the mailing list.
</P>
    <A name="usexfsloop">
<h2>
Q: Can I use XFS on loopback block devices?
</h2>
<P>
Yes. If you are using a 2.4.2 based XFS kernel you need to apply Jens
Axboe's loop-xfs-7c fix for it to work (the fix is for a problem in
2.4.2 and has nothing really to do with XFS). You may get this patch from:
</P>
<ul>
    <A HREF="ftp://ftp.kernel.org/pub/linux/kernel/people/axboe/xfs/">ftp://ftp.kernel.org/pub/linux/kernel/people/axboe/xfs/</A>
</ul>
<P>
</P>
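<p>
A rough sketch of setting up XFS on a loopback device (file name, loop
device and size are just examples):
</p>
<pre>
    dd if=/dev/zero of=xfs.img bs=1024k count=100
    losetup /dev/loop0 xfs.img
    mkfs -t xfs /dev/loop0
    mount -t xfs /dev/loop0 /mnt
</pre>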
    <A name="usexfsroot">
<h2>
Q: Can XFS be used for a root filesystem?
</h2>
<P>
Yes.<br>
A document describing how to convert your system to XFS is currently
being worked on. You may find the current state of this document at:
</P>
<ul>
    <A HREF="http://www.linuxdoc.org/HOWTO/Linux+XFS-HOWTO/">http://www.linuxdoc.org/HOWTO/Linux+XFS-HOWTO/</A>
</ul>
<P>
You might also have a look at the Linuxcare Bootable Toolbox, which also supports XFS starting from version 2.0:<br>
</P>
<ul>
    <A HREF="http://lbt.linuxcare.com">http://lbt.linuxcare.com</A>
</ul>
<P>
</P>
    <A name="whatpartitionxfs">
<h2>
Q: What partition type should I use for XFS? 
</h2>
<P>
Linux native filesystem (83).
</P>
    <A name="xfsmountfail">
<h2>
Q: Mounting the XFS filesystem does not work - what is wrong?
</h2>
<P>
If you get something like:
</P>
<pre>
    mount: /dev/hda5 has wrong major or minor number
</pre>
<P>
you either don't have XFS compiled into the kernel (or you forgot
to load the modules), or you did not use the "-t xfs" option on mount
or the "xfs" filesystem type in <tt>/etc/fstab</tt>.<br>
If you get something like:
<pre>
mount: wrong fs type, bad option, bad superblock on /dev/rd/c0d0p1,
       or too many mounted file systems
</pre>
and from /var/log/messages:
<pre>
XFS: bad magic number
XFS: SB validate failed
</pre>
This means that you cannot mount the filesystem due to corruption.
You will need to run xfs_repair and hope it can be repaired. If you
hit this you have serious problems: it can be anything from disks
failing in mysterious ways to software RAID gone mad, or corruption
through bad cables, drivers, DMA, etc.
</p>
<p>
If the mount hangs you can use xfs_repair -L to zero the journal
so that the system can mount the disks again. To date this has only
been observed on the /var filesystem. We do not yet know what is
causing these hangs (01-06-2002). Please contact the mailing list
when you observe this failure; it is a very rare problem, which
makes it hard to debug.
</p>
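<p>
With the filesystem unmounted, zeroing the log looks like this (a last
resort - any metadata changes still in the log are lost; the device name
is a placeholder):
</p>
<pre>
    xfs_repair -L /dev/foo
</pre>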
    <A name="doesxfsworkmd">
<h2>
Q: Does XFS run with the linux software RAID md driver?
</h2>
<P>
Yes - the current XFS tree contains everything you need to run
XFS on top of any md RAID level. Note that write performance
using XFS on top of software RAID level 5 is bad with anything
lower than 2.4.18. Using an external log device returns the
performance to normal. You could solve this by making a separate
md RAID 1 of about 50 MB on the disks and using the rest of the
space for the RAID 5 volume; in this scenario you will have normal
performance. Kernels >= 2.4.18 contain fixes which help performance
a lot; using an external log is still faster, but the penalty is
smaller.
</P>
    <A name="xfsworkraid">
<h2>
Q: Does XFS run on hardware RAID?
</h2>
<P>
Yes - hardware RAID looks like a normal disk to XFS (as to any other
filesystem), so this should not be a problem.
</P>
    <A name="xfsworknfs">
<h2>
Q: Can I run NFS on top of an XFS partition?
</h2>
<P>
Yes. To get good performance make sure to use an XFS tree from after
mid-March 2001; there were some important fixes for usable NFS performance,
and no more problems with XFS and NFS are known since then.<br>
If you are still using an older 1.x release we suggest upgrading to the
1.1 release, which also fixes a lot of crashes of older kernels under load.
</P>
    <A name="resizexfspartition">
<h2>
Q: Is there a way to make an XFS filesystem larger or smaller?
</h2>
<P>
You can <em>NOT</em> make an XFS partition smaller. The only way to do so
would be a complete dump, format and restore. If anyone is feeling
adventurous, contact the mailing list; people will be glad to help out and
give some directions.
</p>
<p>
An XFS filesystem may be enlarged within a partition using
<tt>xfs_growfs</tt>. You need free space after the partition to do so:
remove the partition, recreate it larger with the <em>exact same</em>
starting point, and run xfs_growfs to enlarge the filesystem. This
operation is dangerous to your data; back up your filesystem before using
this tool.<br>
Using XFS filesystems on top of LVM makes this a lot easier.
</P>
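<p>
Once the enlarged partition is in place, growing the (mounted) filesystem
is a single command; the mount point is a placeholder:
</p>
<pre>
    xfs_growfs /somewhere
</pre>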
    <A name="useirixdisks">
<h2>
Q: Will I be able to use my old IRIX XFS disks on linux?
</h2>
<P>
Yes. The on-disk format of XFS is the same on IRIX and Linux. Obviously,
you should back up your data before trying to move it between systems.
Filesystems must be "clean" when moved (ie unmounted correctly). If you
plan to use IRIX disks on Linux keep the following things in mind: the
kernel needs to have SGI partition support enabled (found in the
File systems -> Partition Types submenu of a "make menuconfig"); there
is no XLV support in Linux, so you won't be able to read IRIX disks
using the XLV volume manager; and not all blocksizes available on
IRIX are available on Linux for now (only the pagesize of the
architecture - 4k for i386, ppc, ..., 8k for alpha, sparc, ... - is
possible for now). Make sure that the directory format is version 2 on
the IRIX disks; Linux can only read v2 directories at the moment, and
using v1 will probably fail in spectacular ways.
The TODO list has an item for mounting disks where blocksize < pagesize,
but it is way down the list. Support for blocksize > pagesize needs the
Linux kernel reworked to support it; this might come somewhere in 2.5.x.
</p>
<p>
The only real caveat here is that at the moment XFS on Linux only supports
filesystems where the disk block size equals the page size.<br>
This is a restriction of the Linux VM subsystem: at the moment the file
block size must equal the hardware page size. There is work in progress to
remove the restriction, but it is kernel 2.5 code.
</P>
    <A name="hangprocess">
<h2>
Q: Processes are hanging when acting on an XFS filesystem (for
instance in sv_wait) - what is this?
</h2>
<p>
Recompile your kernel with gcc 2.91.66 (kgcc), or with 2.95.3 or later.
See the other entry about the use of compilers.
</P>
    <A name="compilersissues">
<h2>
Q: Are there any known issues about gcc for compiling the XFS kernel tree?
</h2>
<p>
Yes. So far some problems were reported with kernels built with
gcc 2.95.2; these were solved by compiling with egcs 2.91.66.<br>
Please note that the problems with gcc 2.95.2 seem to be restricted
to the i386 platform - on ppc, for instance, it works just fine
with 2.95.2.<br>
People running Debian, SuSE or Slackware should make sure that
they are using at least 2.95.3.<br>
People running Red Hat Linux should use gcc 2.96-85 or later, which
is provided as an errata update on the Red Hat update site.<br>
Earlier versions had issues ranging from oopses to hangs and fs corruption.<br>
* NOTE: gcc 2.91.66 (kgcc) is the most tested and known working compiler with
respect to XFS. All the releases so far (including 1.1) are built using the
kgcc compiler.<br>
</P>
<p>
If you are using gcc 3.0 and it gives problems or does not compile, drop a
note on the list with the oops and ksymoops output. We will be working on
getting XFS fully functional with gcc 3.0; 3.0.1 seems to produce correct
kernels as well.<br>
Do note that gcc 3.x is still experimental with respect to the Linux kernel.
</p>
    <A name="whatcmds">
<h2>
Q: What are all those other things in the cmd/xfs directory?
</h2>
<p>
Some of them are other interesting tools: db - <tt>xfs_db</tt> is an XFS
filesystem debugger (working); copy - <tt>xfs_copy</tt> is a tool for
effectively copying one filesystem to another device (not yet ported,
volunteer wanted); fsr - <tt>xfs_fsr</tt> is a defragmenter for XFS
(working); repair - <tt>xfs_repair</tt> is the consistency checker for an
XFS filesystem (working). As already mentioned earlier, the cmd structure
changed a bit at the beginning of 2001 - now it all looks a bit
clearer, I think (modeled a bit after the ext2 tools structure). The
only other subdir is xfstests - the SGI XFS stress test suite.
</p>
    <A name="rh7syscall">
<h2>
Q: Why does my Red Hat 7.0 system not boot with the XFS beta kernel?
</h2>
<p>
There is a syscall conflict which causes this problem. Please use the
latest XFS code from the CVS tree, which has this fixed.
</p>
    <A name="xfschecks">
<h2>
Q: My filesystem is ok - but xfs_check or xfs_repair shows errors/won't run - what's wrong here?
</h2>
<p>
You cannot run xfs_repair on a mounted filesystem, although support is available in CVS (08-02-2002) that lets you
run xfs_repair with the -n switch on a read-only mounted filesystem. You must not try to repair a mounted fs, since
attempting this will result in data loss and corruption.<br>
If you have to repair your filesystem it needs to be unmounted first. If this is the root (/) filesystem, this
means you have to boot from a bootable CD and perform the repair on the unmounted filesystem. There is a link to
various bootable floppy/CD projects mentioned in the FAQ <a href="#xfsbootdisks">here</a>.
</P>
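<p>
So the usual repair sequence is (device and mount point are placeholders):
</p>
<pre>
    umount /somewhere
    xfs_repair /dev/foo
    mount -t xfs /dev/foo /somewhere
</pre>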
    <A name="xfsdbench">
<h2>
Q: I am getting very bad dbench (etc.) results with XFS - what's wrong here?
</h2>
<P>
First dbench: this is a very metadata-intensive benchmark which might
be a little limited by the default log size. You will get better
results by creating the filesystem with
</P>
<P>
<PRE>
    mkfs -t xfs -l size=32768b -f /dev/device
</PRE>
</P>
<P>
This creates a bigger log. You currently cannot resize the log with xfs_growfs; you will need to remake the fs
for this option. It is also a good idea to mount metadata-intensive filesystems with
</P>
<P>
<PRE>
    mount -t xfs -o logbufs=8,logbsize=32768 /dev/device /mountpoint
</PRE>
</P>
<P>
Note that some earlier versions allow only 4 logbufs as a maximum.
Using more logbufs can fail if your system does not have enough RAM;
using 8 logbufs on a machine with 128MB of RAM will probably fail.
Also, since mid-March there are some changes in the tree which improve
the overall dbench performance a bit.
Have a look at the xfs.txt file in the Documentation/filesystems
subdirectory of your kernel sources for those and other XFS mount
options.
</P>
<P>
In general you should get about the same or better performance values
with XFS in various benchmarks. One thing XFS is usually bad at is
removing large numbers of files (rm -rf or bonnie++). Kernels >= 2.4.18
have an asynchronous delete patch which speeds up large deletes.
</P>
    <A name="longumount">
<h2>
Q: Mounting or umounting an XFS root filesystem takes a very long time - why?
</h2>
<P>
This is fixed in the current code (mid-March 2001). Before that fix it was
normal and harmless - it just took a bit of time (on boot and halt with
SuSE startup scripts, only on halt with Red Hat based scripts).
</P>
    <A name="aic7xxxcrash">
<h2>
Q: The Adaptec aic7xxx driver crashes when booting the kernel.
</h2>
<P>
It produces the following error:
<pre>
SCSI subsystem driver Revision: 1.00
PCI: Found IRQ 11 for device 00:0c.0
scsi0: Adaptec AIC7XXX EISA/VLB/PCI SCSI HBA DRIVER, Rev 6.1.13
        <Adaptec 2940 Ultra2 SCSI adapter>
        aic7890/91: Ultra2 Wide Channel A, SCSI Id=7, 32/255 SCBs
ahc_intr: AWAITING_MSG for an SCB that does not have a waiting message
SCSIID = 7, target_mask = 1
Kernel panic: SCB = 3, SCB Control = 40, MSG_OUT = 80 SCB flags = 6000
In interrupt handler - not syncing
</pre>
The Adaptec driver in 2.4.5 and later needs to have the following
option selected in order to work:
<pre>
[ ]   Build Adapter Firmware with Kernel Build (NEW)
</pre>
</P>
    <A name="aic7xxxcompile">
<h2>
Q: The Adaptec aic7xxx does not compile.
</h2>
<P>
It spits out the following error during compilation:
<pre>
gcc -I/usr/include -ldb1 aicasm_gram.c aicasm_scan.c aicasm.c
aicasm_symbol.c -o aicasm aicasm_symbol.c:39: db1/db.h: No such file or
directory
make[5]: *** [aicasm] Error 1
</pre>
The Adaptec driver in 2.4.2 and later kernels needs the db headers.
These can be found in the db-devel packages.
</P>
<P>
Or it produces an error that it cannot find two header files (.h) in the build
directory. In this case you should activate the following option:
<pre>
[ ]   Build Adapter Firmware with Kernel Build (NEW)
</pre>
Make sure to run make mrproper after selecting this option.
</p>
    <A name="kernelcompile">
<h2>
Q: The kernel does not compile.
</h2>
<p>
Before trying to compile the cvs tree do a
<pre>
make mrproper
</pre>
If you also copied your own .config into the tree make sure to run
<pre>
make oldconfig
</pre>
</p>
    <A name="largefilesupport">
<h2>
Q: Does XFS support large files (bigger than 2GB)?
</h2>
<p>
Yes, XFS supports files larger than 2GB. Large file support (LFS) depends
largely on the C library of your system: glibc 2.2 and higher have
full LFS support. If your C library does not support it you will get errors
that the value is too large for the defined data type.<br>
Userland software needs to be compiled against an LFS-compliant C library in
order to work. You will be able to create 2GB+ files on non-LFS systems, but
the tools will not be able to stat them.<br>
Distributions based on glibc 2.2.x and higher will function normally. Note
that some userspace programs like tcsh do not behave correctly even when
compiled against glibc 2.2.x.<br>
You may need to contact your vendor/developer if this is the case.
</p>
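<p>
A quick way to check whether your userland copes with large files is to
create a sparse file past the 2GB mark and stat it (a sketch; dd and ls
themselves need LFS support for this to work):
</p>
<pre>
    dd if=/dev/zero of=bigfile bs=1024k seek=3000 count=1
    ls -l bigfile
</pre>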
<p>
Here is a snippet of an email conversation with Steve Lord on the topic of
the maximum file size of XFS under Linux.
<pre>
I would challenge any filesystem running on Linux on an ia32, and using
the page cache to get past the practical limit of 16 Tbytes using buffered
I/O. At this point you run out of space to address pages in the cache since
the core kernel code uses a 32 bit number as the index number of a page in the
cache.

As for XFS itself, this is a constant definition from the code:

#define XFS_MAX_FILE_OFFSET ((long long)((1ULL<<63)-1ULL))

So 2^63 bytes is theoretically possible.

All of this is ignoring the current limitation of 2 Tbytes of address
space for block devices (including logical volumes). The only way to
get a file bigger than this of course is to have large holes in it.
And to get past 16 Tbytes you have to used direct I/O.
</pre>
Which would mean a theoretical 8388608TB file size. Large enough?
</p>
    <A name="lvmgrowfs">
<h2>
Q: When growing an LVM volume with xfs_growfs multiple times it fails, why?
</h2>
<p>
At least one user has experienced problems when resizing an LVM volume
multiple times. The workaround is to run xfs_repair after each xfs_growfs.
SGI has not been able to replicate this at the moment.
</p>
    <A name="nfspermissions">
<h2>
Q: NFS permissions seem to be reset for no apparent reason?
</h2>
<p>
The 2.4.5 tree from the beginning of June 2001 had a problem where
permissions over NFS were reset. This is fixed in the development tree.
</p>
    <A name="xfsprogskernel">
<h2>
Q: Is there any relation between the xfs utilities and the kernel version?
</h2>
<p>
No, there is no relation. Newer utilities only have fixes and checks the
previous versions might not have. These are the same utilities that have
been used under IRIX for years, so they are well developed.<br>
With the introduction of the new ACL interface in the XFS 1.1 release, you
will need the 2.0+ userspace utilities to correctly use the ACL support.
</p>
    <A name="xfsfitfloppy">
<h2>
Q: Why doesn't my XFS kernel fit on a boot floppy that I make with mkbootdisk?
</h2>
<p>
XFS adds a huge amount of kernel code, which means that your kernel is probably
too big to fit on a boot floppy together with an initial ramdisk and SCSI drivers.
There is a patch available for mkbootdisk to properly format a floppy with a larger size.
Your mileage may vary when booting overformatted floppies of 1.68 or 1.72MB.
The patch can be found at <a href="http://iserv.nl/files/xfs/mkbootdisk.large.patch">
http://iserv.nl/files/xfs/mkbootdisk.large.patch</a>. Use /dev/fd0u1680 for making
a 1.68MB floppy.<br>
Another method is building XFS as a module, but this is probably more hassle than
trying to boot a superformatted floppy.
</p>
    <A name="xfsbootdisks">
<h2>
Q: Where can I find pre-made XFS boot disks?
</h2>
<p>
Kelly Eicher has made a boot floppy set available on his homepage:
<a href="http://www.astro.umn.edu/~carde">http://www.astro.umn.edu/~carde</a>.
These are very helpful and easy to use when migrating or repairing a system.
</p>
<P>
You might also have a look at the Linuxcare Bootable Toolbox, which also supports XFS starting from version 2.0:<br>
</P>
<ul>
    <A HREF="http://lbt.linuxcare.com">http://lbt.linuxcare.com</A>
</ul>
<P>
</P>
    <A name="backingupxfs">
<h2>
Q: How can I back up an XFS filesystem and ACLs?
</h2>
<p>
You can back up an XFS filesystem with utilities like xfsdump, or with
standard tar for ordinary files. If you want to back up ACLs you will need
to use xfsdump; it is the only tool at the moment that supports backing up
ACLs. Support for XFS and ACLs is underway in several commercial backup
tools. xfsdump can be made to work with Amanda.
</p>
    <A name="linuskernel">
<h2>
Q: When will XFS be included in the mainstream Linus kernel?
</h2>
<p>
Good question. There is still a decent amount of work to be done to make the
XFS patch less intrusive on the standard Linus kernel. We have split the patches into multiple parts so they are
easier to apply and for developers to read up on, and you can now also opt out of the kdb patches if you want.
We might even be included at some point in the 2.4 series when some 2.5 features are backported.
We _are_ sending patches to Linus and Alan but nothing is coming back yet ;)<br>
Maybe if people tell them how much they like it and that it _works_, they might be convinced.
XFS also interacts very closely with the VM because of its rich feature set. A lot of functionality is available now in
the XFS patch; the framework in the Linus kernel is slowly getting there. Work is in progress on creating a single ACL
space, as well as extended attributes work. XFS will probably be included in the 2.5 tree, but in the meantime
there will be patches made available for patching a Linus 2.4 kernel with XFS.<br>
The easiest way to get the "latest and greatest" is by checking out the XFS development tree from CVS.<br>
</p>
    <A name="vmwarelock">
<h2>
Q: VMware says that it does not know if the filesystem supports locking?
</h2>
<p>
XFS supports locking, but this is not known to VMware. It has been reported
to VMware, but the status is unknown at this point. If you follow the
instructions that VMware gives when starting up, you should be fine.
</p>
    <A name="acldocs">
<h2>
Q: Where can I find documentation about acls?
</h2>
<p>
Take a look at <a href="http://acl.bestbits.at/">http://acl.bestbits.at/</a>
and get the POSIX 1003.1e draft.
</p>
    <A name="xfsdocs">
<h2>
Q: Where can I find documentation about XFS?
</h2>
<p>
There are some papers available on the SGI XFS project page (mentioned
above) and I have set up an LXR indexed version of the SGI XFS CVS tree
at:
</P>
<ul>
    <A HREF="http://innominate.org/~graichen/projects/lxr/source/?v=xfs">http://innominate.org/~graichen/projects/lxr/source?v=xfs</A>
</ul>
<P>
which might help you find an easy and good way through
the sources. I plan to keep this tree automatically updated to the
current SGI XFS CVS version on a daily basis.
If anyone has pointers to other XFS related docs, just send me a
mail (address - see above).
</P>
<p>
So for those of you with money to spare:<br>

<a href="http://www1.fatbrain.com/asp/bookinfo/bookinfo.asp?theisbn=1400512638&vm=">
http://www1.fatbrain.com/asp/bookinfo/bookinfo.asp?theisbn=1400512638&vm=</a>

And for those of you who are cheapskates:<br>

<a href="http://techpubs.sgi.com:80/library/tpl/cgi-bin/download.cgi?coll=0530&db=bks&pth=/SGI_Admin/XFS_AG">
http://techpubs.sgi.com:80/library/tpl/cgi-bin/download.cgi?coll=0530&db=bks&pth=/SGI_Admin/XFS_AG</a><br>

I think this is externally accessible, and you can get the PDF form of the
book from there for the price of a 500K download.
</p>
    <A name="debianbf">
<h2>
Q: Where can I find Debian boot/install disks?
</h2>
<p>
Eduard Bloch (<a href="mailto:blade@debian.org">blade@debian.org</a>) has an
up-to-date set of boot floppies, and a CD image, at
<A HREF="http://people.debian.org/~blade/XFS-Install/">
http://people.debian.org/~blade/XFS-Install/</A>
<P>
The original debian boot disks by Zoltan Kraus are mirrored at
<A HREF="http://www.physik.tu-cottbus.de/~george/woody_xfs/">
http://www.physik.tu-cottbus.de/~george/woody_xfs/</A>
    <A name="nvidia">
<h2>
Q: Do nVIDIA drivers work with XFS?
</h2>
<p>
Yes, the nVIDIA drivers work fine on XFS systems. Be sure to use the
1.0-1251 release or later of the nVIDIA Linux drivers.<br>
These are known to work starting from 2.4.0-test5 and later.
The FAQ maintainer (me) uses these nVIDIA drivers on his own XFS system, so
notice will be given here if a version does not work.<br>
The drivers can be found at <a
href="http://www.nvidia.com/">http://www.nvidia.com/</a>
</p>
<p>
If you are using the 1.0 release of XFS, I suggest disabling devfs. Devfs can be disabled by editing your lilo.conf
and inserting the following line:
<pre>
append="devfs=nomount"
</pre>
This will prevent devfs from interfering. Other people suggest inserting the following magic in /etc/rc.d/rc.local (on
Red Hat systems):
<pre>
major=195                        # major device number of the nvidia driver
for i in 0 1 2 3; do
    devfile="/dev/nvidia$i"
    rm -f $devfile
    if ! mknod $devfile c $major $i || ! chmod 0666 $devfile; then
        echo "Couldn't create device \"$devfile\"."
        exit 1
    fi
done
devfile=/dev/nvidiactl           # the control device
rm -f $devfile
mknod $devfile c $major 255
chmod 0666 $devfile
This will create the nvidia devices with each boot.
</p>
    <A name="slackware">
<h2>
Q: Where can I find Slackware boot/install disks?
</h2>
<p>
There are multiple Slackware boot disks available. The first is <a
href="http://village.flashnet.it/users/fn048069/linux-xfs.html">
http://village.flashnet.it/users/fn048069/linux-xfs.html</a>. The page explains what to do and what the disks
contain; the author of these disks can be contacted at daedalus@freemail.it<br>
The second is located at <a href="http://slackjfs.setcom.bg/">http://slackjfs.setcom.bg/</a>, which has Slackware
install disks for most available journaling filesystems including XFS, ReiserFS and JFS.
</p>
    <A name="stablexfspatches">
<h2>
Q: Where can I find stable XFS patches?
</h2>
<p>
Patches are available for each formal XFS release.  For the latest, please see
<a href="ftp://oss.sgi.com/projects/xfs/download/latest/kernel_patches/">
ftp://oss.sgi.com/projects/xfs/download/latest/kernel_patches/</a>.  If you don't see the kernel
version you want here, you may be interested in the <a href="#snapxfspatches">snapshot patches</a>.
</p>
    <A name="snapxfspatches">
<h2>
Q: Where can I find snapshot XFS patches for kernel-x.y.z?
</h2>
<p>
<i>Snapshot</i> patches for getting a kernel with XFS can be found on the FTP server in
<a href="ftp://oss.sgi.com/projects/xfs/download/patches">
ftp://oss.sgi.com/projects/xfs/download/patches</a>.
Most patches here are for the Linus tree; patches for the -ac
(Alan Cox) series are not available. Alan Cox can sometimes produce more
than 3 kernels a day, which is a pace the SGI people cannot keep up with.
If you want to make unofficial patches available for the -ac series and
think you can keep up with the pace, drop us a note on the list.<br>
When the VM madness subsides it should become easier to integrate XFS into an -ac kernel.
</p>
<p>
The patches you can find here are provided for recent Linus kernels, either
to seed a CVS tree (which makes it faster to create your own local CVS
tree) or to patch a Linus tree into an XFS-capable kernel tree.<br>
</p>
<p>
These patches are generally released for each new kernel version.  Read the README
file in the above URL for more information.
</p>
    <A name="latest">
<h2>
Q: What version is your CVS tree currently based on?
</h2>
<p>
Follow this link to see what version is in the CVS tree:
<a href="http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.4-xfs/linux/Makefile?only_with_tag=HEAD">
the top level Makefile</a> for the current development CVS version.
</p>
    <A name="mountoptions">
<h2>
Q: What mount options does XFS have?
</h2>
<p>
There are a few mount options that influence an XFS filesystem, described below.
<pre>
At mount time, there are really three options which will make a difference

	o biosize - given as log2, so in the released tree the default is
	  16 (64K) and in the development tree the default is 12 (4K).
	  Making this larger may help some applications, it will hinder
	  others.

	o osyncisdsync - indicates that O_SYNC is treated as O_DSYNC, which
	  is the behavior ext2 gives you by default. Without this option,
	  O_SYNC file I/O will sync more metadata for the file.

	o logbufs=4 or logbufs=8, this increases (from 2) the number of
	  in memory log buffers. This means you can have more active
	  transactions at once, and can still perform metadata changes
	  while the log is being synced to disk. The flip side of this
	  is that the amount of metadata changes which may be lost on
	  crash is greater.
</pre>
</p>
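<p>
As a sketch, a metadata-intensive filesystem might be mounted with something
like the following (device and mount point are placeholders; check the
xfs.txt documentation for the exact option spelling in your version):
</p>
<pre>
    mount -t xfs -o osyncisdsync,logbufs=8 /dev/foo /somewhere
</pre>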
    <A name="rpmdb">
<h2>
Q: Rebuilding an RPM database makes it very large, why?
</h2>
<p>
In the original 1.0 release the default biosize was 16 (64K), which made it
possible to end up with a _really_ large RPM database when rebuilding it.
This has been fixed in kernels after 2.4.4 and will be fixed in the 1.0.1+
release.
</p>
    <A name="rsyncuse">
<h2>
Q: Can I use rsync to download XFS?
</h2>
<p>
Yes you can! Here is an example fetching a release ISO:
<pre>
rsync -avu oss.sgi.com::xfsftp/Release-1.0.1/installer/RH7.1-SGI-XFS-1.0.1.iso .
</pre>
Note that using compression may give problems.
</p>
    <A name="jfsplusxfs">
<h2>
Q: Can I use JFS and XFS together?
</h2>
<p>
Yes, this should work without too much trouble. Someone already has patches
for making these two work together:
<pre>
I'm running such a setup for several months already. This only works,
because Steve Best added a little tweak upon my request to get this going
because XFS modifies some type declaration that JFS depends on.
I'm maintaining patches with XFS plus JFS on my ftp server:
<a href="http://ftp.uni-duisburg.de:/Linux/filesys/">
ftp://ftp.uni-duisburg.de:/Linux/filesys/</a>
</pre>
or see here <a href="http://oss.sgi.com/projects/xfs/mail_archive/0107/msg00025.html">
http://oss.sgi.com/projects/xfs/mail_archive/0107/msg00025.html</a>
</p>
    <A name="searcharchive"><h2>
Q: Where can I find a searchable mail archive?
</h2>
<p>
For people who want a searchable archive, one can be found at the following URL:
<a href="http://marc.theaimsgroup.com/?l=linux-xfs&r=1&w=2">
http://marc.theaimsgroup.com/?l=linux-xfs&r=1&w=2</a>
</p>
    <A name="blocksize">
<h2>
Q: What does the warning that mkfs.xfs gives about the blocksize mean?
</h2>
<p>
<pre>
mkfs.xfs: warning - cannot set blocksize on block device /dev/hde1:
</pre>
You are doing nothing wrong. XFS uses an extra ioctl to set the block
size of the device, and it is not implemented for this device. However, a
recent review of the code seems to show that we do not actually
need the ioctl anymore. Since the filesystem was made anyway, this is a
message you should be able to ignore.

Try mounting the filesystem and see what happens.
</p>
    <A name="problemreport">
<h2>
Q: What info should I include when reporting a problem?
</h2>
<p>
Things to include are: which version of XFS you are using (if it is a
CVS version, of which date) and which version of the kernel. If you have
problems with userland packages, please report the version of the
package you are using; the same applies to which distribution you are running.<br>
If you get fs corruption, report in which files and with which applications. If you
experience an oops, please run it through ksymoops so that it is actually
understandable for the developers what is going wrong.<br>
If you see strange errors in the log about things failing, or see lockups, be sure
to report what hardware you are on and in what configuration.
</p>
<p>
The following document has some background on <a href="http://www.chiark.greenend.org.uk/~sgtatham/bugs.html">How 
to Report Bugs Effectively</a>.
</p>
    <A name="error990">
<h2>
Q: I see applications returning error 990, what's wrong?
</h2>
<p>
Error 990 stands for EFSCORRUPTED, which usually means XFS has
detected a metadata problem on the disk and has shut it down.<br>
There should be a console message when this happens. Suspect hardware
problems or serious software problems. People using HP servers have noticed
this corruption with AMI MegaRAID based RAID controllers.
</p>
    <A name="fullfs">
<h2>
Q: Does the filesystem slow down when it is nearly full?
</h2>
<p>
XFS will slow down doing allocations when it is really full; you are
nowhere near full until 99.x%. Basically XFS chops the
filesystem into allocation groups (1 to 4 Gbytes each), and free space is
managed independently in each of these. The slowdown happens when you
have to scan through lots of allocation groups looking for space to
extend a file. There is an in-memory summary structure which tells you
whether it is worth even looking in an allocation group, so it is not a
major slowdown - unless you have lots of parallel allocation calls
going on at the time.
</p>
    <A name="undelete">
<h2>
Q: Does the filesystem have an undelete function?
</h2>
<p>
There is no undelete in XFS; in fact, once you delete something, the chances
are the space it used to occupy is the first thing reused. Undelete is
really something you have to design in from the start. Getting anything back
after an accidental rm -rf is close to impossible.
</p>
    <A name="forceshutdown">
<h2>
Q: I see an xfs_force_shutdown message in the dmesg or system log, what is going wrong?
</h2>
<p>
The following error is found in your system log:
<pre>
xfs_force_shutdown(ide0(3,8),0x1) called from line 4069 of file
xfs_bmap.c.  Return address = 0xc017fbcb
I/O Error Detected.  Shutting down filesystem: ide0(3,8)
Please umount the filesystem, and rectify the problem(s)
</pre>
This error is common when XFS runs into an I/O error; it can be caused by either hardware or software failure.
The messages say more specifically whether the corruption happened in memory or while writing to disk.
The shutdown is there to protect your data. Most of the time people run into a bad cluster on the disk.<br>
<br>
This is not always fatal, since in some cases newer IDE drives will map a bad cluster out to a spare one.
This is done until all spare clusters inside the disk are gone, after which the drive will produce errors. Most SCSI
drives have had this feature for a long time.<br>
<br>
Note: if you have S.M.A.R.T. on your IDE disk and controller you can be notified when a drive is going bad. This does
not always work right, since some disks only start reporting errors when all the spare clusters are gone, while
others start barking loudly and give warnings when they start mapping out bad clusters in the first place.
Each disk manufacturer behaves differently.<br>
<br>
What can also happen is that a bad cluster is detected but not remapped until the first powercycle/reboot. This
has been observed but should not happen; it is a very rare case. Maybe the folks at linux-ide.org can
tell you something more.<br>
<br>
If you have a SCSI system this will probably mean that a disk has gone bad. RAID systems should not see this error
unless _very_ weird things happen or your driver is b0rken. If you are using software RAID or LVM this can sometimes be
a software problem; this has been observed once up to now on a software RAID device. If you can replicate this error
please report the problem on the list with as much related info as you can. If it produces an oops please include the
ksymoops output.
</p>
    <A name="nulls">
<h2>
Q: Why do I see binary NULLS in some files after recovery when I unplugged the power?
</h2>
<p>
If it hurts don't do that!
</p>
<p>
* NOTE: XFS 1.1 and kernels >= 2.4.18 have the asynchronous delete path, which
means that you will see a lot fewer of these problems. If you still have not
updated to the 1.1 release or later, now would be a good time!<br>
<br>
Basically this is normal behavior. XFS journals metadata updates, not data
updates. After a crash you are supposed to get a consistent filesystem
which looks like the state some time shortly before the crash, NOT what
the in-memory image looked like the instant before the crash. Since XFS
does not write data out to disk immediately unless you tell it to with
fsync or an O_SYNC open (the same is true of other filesystems), you
are looking at an inode which was flushed out to disk, but for which the
data was never flushed to disk. You will find that the inode is not
taking any disk space, since all it has is a size; there are no disk
blocks allocated for it yet.
</p>
<p>
The same will apply to other metadata-only journaling filesystems.
The current Linux kernel VM will write out the metadata after 1/60th of
a second and the data after 30 seconds. So the possibility of losing data
when unplugging the power within 30 seconds is quite large. The only
way to be sure that your data gets to the disk is to use fsync in the
program, or sync after closing the program.
</p>
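<p>
So a script that must have its output on disk before a possible power loss
can force it out explicitly, for example:
</p>
<pre>
    cp important.data /somewhere/ && sync
</pre>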
    <A name="cbindings">
<h2>
Q: Where can I find C++ bindings for Extended Attributes?
</h2>
<p>
The libferris project at <a href="http://witme.sourceforge.net/libferris.web/">http://witme.sourceforge.net/libferris.web/</a> has tools for
accessing Extended Attributes, including writing to XFS Extended Attributes.
</p>
    <A name="diskimages">
<h2>
Q: Where can I find a tool to create disk images from an XFS filesystem?
</h2>
<p>
You can find the Partition Image tool at <a href="http://www.partimage.org/">http://www.partimage.org/</a>, which can
create disk images to help speed up cloning systems or make snapshot images for backup.<br>
XFS support was just added in version 0.6.0rc3. It has had a little
testing with small partitions (300 MB) and large ones (15 GB), but more
testing is needed to be sure this support is stable.<br>
If you need this functionality or would like to test, help them out and go there.
</p>
    <A name="redhat">
<h2>
Q: Where can I find Red Hat boot/install disks?
</h2>
<p>
There is an installer ISO for Red Hat 7.2 available on the
SGI FTP site at <A HREF="ftp://oss.sgi.com/projects/xfs/download/Release-1.1/installer/i386/">
ftp://oss.sgi.com/projects/xfs/download/Release-1.1/installer/i386/</A>.  If you
need an installer for Red Hat 7.0 or 7.1 for compatibility requirements you can find it under one
of the testing directories or use an older release.<br>
Installers for Red Hat releases are currently provided by SGI but are not available directly after the release of a new
Red Hat version; work on a modified installer begins after the official release. PLEASE BE PATIENT. SGI tests their
installers and CDs before releasing them. You don't want an installer that eats your filesystem, do you?
</p>
    <A name="nativesupport">
<h2>
Q: Where can I find Linux distributions that natively support XFS?
</h2>
<p>
Several large and small Linux distributions have XFS support built in:
</p>
<table border=0 cellspacing=2 width="60%">
<tr>
<td valign=top bgcolor="#88ee88">
<b>Distribution:</b></td>
<td valign=top bgcolor="#99cccc">
<b>Support since:</b></td>
</tr>

<tr>
<td valign=top bgcolor="#88ee88">
<A HREF="http://www.mandrakesoft.com">Mandrake Linux</A></td>
<td valign=top bgcolor="#99cccc">
Version 8.1</td>
</tr>

<tr>
<td valign=top bgcolor="#88ee88">
<A HREF="http://www.suse.com">SuSE Linux</A></td>
<td valign=top bgcolor="#99cccc">
Version 8.0</td>
</tr>

<tr>
<td valign=top bgcolor="#88ee88">
<A HREF="http://www.gentoo.org">Gentoo Linux</A></td>
<td valign=top bgcolor="#99cccc">
Version 1.0</td>
</tr>

<tr>
<td valign=top bgcolor="#88ee88">
<A HREF="http://www.slackware.org">Slackware Linux</A></td>
<td valign=top bgcolor="#99cccc">
Version 8.1</td>
</tr>

<tr>
<td valign=top bgcolor="#88ee88">
<A HREF="http://www.jblinux.net">JB Linux</A></td>
<td valign=top bgcolor="#99cccc">
Version 2.0</td>
</tr>
</table>
<p>
If your favorite distribution isn't listed here, let them know that you'd like to see XFS included in their next
release!
</p>

<& xfsTemplate,bottom=>1 &>