On Wed, 28 Aug 2002 09:46, Scott McDermott wrote:
>Will SGI or anyone else be rolling up a recent kernel
> any time soon which properly merges their XFS tree with recent
> 2.4-tree changes?
The 2.4.x CVS tree usually tracks the 2.4.x stable releases, since the SGI guys
are putting most of their effort into keeping up with the 2.5.x kernel tree. If
you are feeling up to it you could attempt to merge the 2.4.19 split patches
into 2.4.20-pre yourself.
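If you want to try that, a rough sketch of the process follows; the paths and
patch names here are purely illustrative (the real split patches live on the
SGI FTP/CVS servers), and you should expect rejects to fix up by hand:

```shell
# Hypothetical sketch only -- tarball and patch paths are illustrative.
cd /usr/src
tar xjf linux-2.4.20-pre.tar.bz2     # assumed 2.4.20-pre source tarball
cd linux

# Dry-run each XFS split patch first so conflicts are reported
# without touching the tree, then apply for real.
for p in /path/to/xfs-2.4.19-split-patches/*.patch; do
    patch -p1 --dry-run < "$p" || echo "conflict in $p -- resolve by hand"
done
```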
>
> 2. Does anyone have any idea how close the next release of XFS is (i.e.,
> 1.2 or what have you)?
There was an email on the mailing list last week (I think) from Steve that
explained that there are a couple of outstanding issues and corner cases that
need to be addressed before another "official" release.
> 3. am I better off using XFS from CVS or the XFS-1.1 release?
Personally, I'd say the CVS is the more up-to-date; however, as with
everything, test it before putting it into production. In my experience the
CVS tree is very stable compared to other CVS trees, because only tidy-ups and
fixes go into it. AFAIK all new development is going into the 2.5.x tree.
>I assume
> XFS-1.1 has been regressed and there is some known list of bugs
> somewhere perhaps? It looks to me like several important changes are
> in CVS version when compared to the release-1.1.
>
> 4. does anyone currently use SAMBA with ACL support on Linux-2.4 with
> XFS, in production use, that can relate how stable this combination
> is? In particular for Windows clients, and with oplock support?
I'd have to let someone else answer this for you.
> 5. There have in the past (several months ago anyways) been
> lockup/uptime problems with Linux-2.4 NFSv3 exports of XFS volumes.
> Is anyone using this combination in production presently, and if so
> do they have any problems?
If I remember correctly these were fixed, or at least disappeared. Again - I'm
running this combination, but not in a heavy-use environment, so I'll have to
let someone else answer from their experiences.
> 6. Am I correct that XFS filesystems do not work off a Linux-2.4 native
> software RAID volume, correct?
I am running XFS on md software RAID0, 1 & 5. The issue is that XFS log writes
can be of differing sizes, and the RAID5 code flushes buffers whenever this
occurs. It runs, but not at full performance. The v2 log format is an attempt
to fix this problem, as it forces log writes to the size specified at mkfs.xfs
time. The gotcha with v2 logs at the moment is that in some cases using the
default size deadlocks. There have been emails on the list explaining how to
increase the size to get around this.
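For reference, a v2 log is selected with the -l suboptions at mkfs time; the
size and stripe-unit values below are illustrative, not recommendations, so
check the list archives and your mkfs.xfs man page before using them:

```shell
# Hypothetical example: make an XFS filesystem with a v2 log and an
# explicitly larger log size (values illustrative only; /dev/md0 assumed).
mkfs.xfs -f -l version=2,size=32768b /dev/md0

# A v2 log also lets you set the log stripe unit to match the RAID
# chunk size, e.g. for a 64k md chunk:
mkfs.xfs -f -l version=2,su=64k /dev/md0
```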
> Thank you for any answers to these questions.
No worries
--
Adrian Head
(Public Key available on request.)