Since a few people brought this up off-list, let me make some
notes on my specific XFS deployments ...
1. Kernel 2.4
I have _never_ used the XFS backport to kernel 2.4. Frankly,
I don't trust it. Not because of XFS, but because of kernel
2.4.
I have only used the official XFS releases for kernel 2.4,
largely those for Red Hat 7.x. In fact, I kept deploying
only Red Hat Linux 7.3 with XFS until late last year (when
Fedora Core 3 came out), tapping FedoraLegacy.ORG for
updates.  Again, that means I was still on kernel 2.4.20 --
and I never really trusted newer 2.4 kernels anyway!
I have not had the NFS issues others have complained about,
and I've leaned heavily on xfsdump for backups, including
ACL information as well as quota support.
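For illustration only (the paths and labels below are made
up, not from my actual systems), a level 0 dump and restore
looks roughly like this -- xfsdump picks up extended
attributes, so ACLs come along, and quota limits get stashed
in the dump as well:

    # full (level 0) dump of /home, with session/media labels
    xfsdump -l 0 -L home-full -M backup01 \
        -f /backup/home.l0.xfsdump /home

    # pull everything back into a scratch directory
    xfsrestore -f /backup/home.l0.xfsdump /mnt/restore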
2. Kernel 2.6
With Fedora Core 3, I started deploying XFS on kernel 2.6.
I was very disappointed when Red Hat forked Red Hat
Enterprise Linux 4 development and did not bring XFS over.
I think that was a huge mistake; Red Hat could offer a lot
to XFS if they had to maintain it on equal footing with
Ext3 under RHEL.  Again, I will assert it is in their best
interest to do so.
With Fedora Core 3 I have quotas, NFS, ACLs and, now,
SELinux.  Fedora Core 3 should be supported until
mid-December, when Fedora Core 5 is currently planned to
hit Test 2.  Reality will probably dictate that FC5T2 slip
to early next year -- and even then, with Fedora Core 3's
stability and popularity, I see FedoraLegacy continuing to
support it for some time (unlike Fedora Core 2 or 4).
3. LVM/MD Usage -- I limit mine to volume slicing
Let me start by saying that I'm a huge fan of volume
management.  I use both LVM and LVM2 for flexible, on-line
additions/modifications of logical volumes.  In a nutshell,
I largely use them to slice my disks with more flexibility
-- reserving space, creating new volumes as necessary and
the occasional expansion (although I typically try to stick
to new mounts/symlinks).
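To give a rough idea of what that "slicing" looks like (the
device and volume names here are purely illustrative, not my
real layout), I put the array under LVM, leave free extents
in the volume group as headroom, and only occasionally grow
a file system on-line:

    pvcreate /dev/sda3
    vgcreate vg0 /dev/sda3
    lvcreate -L 20G -n home vg0     # carve out a volume
    mkfs.xfs /dev/vg0/home

    lvextend -L +10G /dev/vg0/home  # the occasional expansion
    xfs_growfs /home                # grows the mounted XFS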
But with that said, let it be known that I don't trust LVM,
and especially not LVM2, with snapshots or more complex
resizing, and definitely not with any RAID operations.  I do
not trust DeviceMapper (DM) with either LVM2 or EVMS right
now.  Why?  All I keep reading about is race condition after
race condition after race condition.
And in each case, it's not limited to XFS.
And when it comes to MD, I really avoid it. I always have.
I've seen a lot of people talk about how software RAID is
better, faster, etc... I've seen people state that it allows
them to use different disk controllers and other hardware,
and not be tied to a vendor.  They also claim it's more
flexible and gives them more options.  While I believe they
are sincere, I've just had a different set of experiences
with hardware RAID.
First off, I've limited myself to only 3Ware and select LSI
Logic (including former Mylex) products over the last 5
years.  3Ware uses an ASIC-driven "storage switch," and I
have only deployed LSI Logic (and former Mylex) products
that are XScale-based (XScale being derived from StrongARM).
These are very high-performing parts -- able to move a lot
of data not only with little CPU overhead, but, more
importantly, without the extensive duplication of data
streams through the CPU-memory interconnect.  I.e., it's not
the XORs that get you, but the duplicated data streams tying
up the interconnect that data services could be using.
[ Same reason hardware switches/routers make better
networking equipment than PCs -- these "storage switch" I/O
processors work on the same principle. ]  Their on-board
RAID intelligence is self-contained, meaning their drivers
are simple, GPL block drivers.
Secondly, I've also had excellent "forward product" volume
compatibility -- especially with 3Ware, across 3+
generations over 5 years, with full support moving volumes
from older to newer cards -- far, far better and longer
than MD (let alone LVM/LVM2).  And many people have never
seen 3Ware's 3DM/3DM2 tools for administration and
monitoring; they are much easier to deploy and have saved
my butt in several cases.  LSI's tools are getting better
too.
So it is this abstraction of RAID into hardware that removes
the multiple layers that often cause the "race conditions"
between LVM/MD and other kernel-level operations.  This is
not just an issue for XFS, or even for Ext3 and Linux in
general, but for many other OSes as well.
Which is why I have been deploying XFS alongside Ext3 for a
long time, provided I "do my homework."  All the issues I
heard about off-list have involved configurations that are
a problem with Ext3 as well.
--
Bryan J. Smith | Sent from Yahoo Mail
mailto:b.j.smith@xxxxxxxx | (please excuse any
http://thebs413.blogspot.com/ | missing headers)
|