
Re: LVM and XFS in 1.0.1-PR1 and usability in production

To: Simon Matter <simon.matter@xxxxxxxxxxxxxxxx>, Steve Lord <lord@xxxxxxx>
Subject: Re: LVM and XFS in 1.0.1-PR1 and usability in production
From: Seth Mos <knuffie@xxxxxxxxx>
Date: Fri, 22 Jun 2001 10:13:02 +0200
Cc: linux-xfs <linux-xfs@xxxxxxxxxxx>
In-reply-to: <3B32F76A.16229BA5@xxxxxxxxxxxxxxxx>
References: <200106211608.f5LG8m713919@xxxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
At 09:44 22-6-2001 +0200, Simon Matter wrote:

> One question regarding kernel RPMs:
> I've built new RPMs from kernel-2.4.5-0.2.9_SGI_XFS_20010613.src.rpm
> and from kernel-2.4.2-SGI_XFS_1.0.1_PR1.src.rpm. I patched them to
> support the new Promise Ultra100 TX2 chipset and disabled
> REISERFS_CHECK to make reiserfs usable as well. I'm not using it, but if
> it's compiled in, it should be done right. Besides those two changes I
> didn't touch anything in the .spec or any .config file. After booting
> the PR1 kernel I found that LVM support was gone. Checking the
> .config files against release 1.0, I found quite a lot of changes.
> Why? Are there so many XFS-related changes, or did it just happen by
> chance? I enabled LVM in all Intel .configs, rebuilt tonight and
> tested now. It seems to work fine. Is there a problem with LVM in PR1,
> or is it as (un)safe to use as in 1.0?

The initial RPMs were spun by Russel, who just took the src.rpm from Red Hat Rawhide and folded all the patches in so that it compiled. He did not alter the .config files in any way, which is probably why the LVM option got overlooked; he is doing this in his spare time.
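The overlooked-LVM problem above can be caught before installing the new kernel. Here is a minimal sketch (the path, variable, and function name are mine, not from the thread) that checks whether a rebuilt kernel's .config still carries LVM support:

```shell
# Sketch: verify that LVM support survived a kernel rebuild by
# inspecting the .config. The default path is an assumption; point
# it at your own build tree.
lvm_enabled() {
    # CONFIG_BLK_DEV_LVM=y (built in) or =m (module) both count.
    grep -q '^CONFIG_BLK_DEV_LVM=[ym]' "$1" 2>/dev/null
}

CONFIG_FILE=${CONFIG_FILE:-/usr/src/linux/.config}
lvm_enabled "$CONFIG_FILE" && echo "LVM enabled" || echo "LVM missing"
```

The same one-liner works for any other option you expect the rebuild to preserve, e.g. `CONFIG_XFS_FS`.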

> About usability and stability:
> I'm installing a 200GB server and a second one as a mirror. The system
> goes on SoftRAID1, the data on SoftRAID5 with LVM on top; XFS on the
> server, ext2 on the mirror because of snapshotting. It will serve
> Samba to ~50 users. I've built a similar setup at home on my old
> Pentium 200MMX with 64MB RAM and it was absolutely stable under heavy
> load. The problem with the new servers is that they will be located in
> London while I'm sitting in Basel, Switzerland. I haven't found any big
> problem with XFS so far, and I really don't want ext2. Yesterday you
> wrote:
> > We use it here, but generally we have upgraded beyond the 1.0 release and
> > run later kernels. The machine hosting the oss website was recently upgraded
> > to the 1.0 release, but was getting an uptime of about 24 hours before it
> > hung. We have bumped it up to a more recent version of XFS, the one in the
> > test 1.0.1 rpms here
> Will it crash after a week or so? Did I misunderstand? Am I very crazy?
> If I go ahead, which version should I take, 1.0 or 1.0.1-PR1? Is there
> a big difference in stability?

What Steve meant was that the original 1.0 release (2.4.2) crashed within a day. The later CVS kernels and the Rawhide kernels are a lot more stable. If oss.sgi.com can move 350GB in 3 days, you can too.
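For reference, the stack Simon describes (software RAID5, LVM on top, XFS on top of that) is assembled roughly like this. This is a sketch with hypothetical device names, sizes, and mount points, not commands from the thread; it assumes the RAID5 array already exists as /dev/md1:

```shell
# Hypothetical layering sketch: SoftRAID5 -> LVM -> XFS.
pvcreate /dev/md1                     # make the RAID5 array an LVM physical volume
vgcreate vg_data /dev/md1             # build a volume group on it
lvcreate -L 180G -n lv_samba vg_data  # carve out a logical volume for the share
mkfs.xfs /dev/vg_data/lv_samba        # put XFS on the logical volume
mount -t xfs /dev/vg_data/lv_samba /export/samba
```

Keeping LVM between the array and the filesystem is what makes the ext2 mirror's snapshotting possible on the second box.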

The problem is that we get bitten more by general 2.4 problems than by XFS problems. The 2.4 kernel is going in the right direction and getting better. I think that by the time it reaches 2.4.10 you can probably throw everything and the kitchen sink at it and it will survive. By then most distributions will even include it as the default kernel.

But for now the CVS and Rawhide kernels hold up fairly well.
Test them against your environment and demands and see if they play out well.

For lighter loads you are safe. Under higher loads and on highmem machines it is better to avoid 2.4 for now. And I mean 2.4, not XFS per se.

Every program has two purposes: one for which
it was written and another for which it wasn't.
I use the last kind.
