
Re: file corruption during emacs build on XFS logical volume

To: Linux XFS <linux-xfs@xxxxxxxxxxx>
Subject: Re: file corruption during emacs build on XFS logical volume
From: Sean Neakums <sneakums@xxxxxxxx>
Date: Wed, 02 Jan 2002 23:22:24 +0000
In-reply-to: <1010013019.1281.6.camel@xxxxxxxxxxxxxxxxxxxx> (Steve Lord's message of "02 Jan 2002 17:10:19 -0600")
Mail-followup-to: Linux XFS <linux-xfs@xxxxxxxxxxx>
References: <6u4rm4r53e.fsf@xxxxxxxxxxxxx> <1009995505.14223.9.camel@xxxxxxxxxxxxxxxxxxxx> <6uy9jgpn2x.fsf@xxxxxxxxxxxxx> <6uu1u4pf48.fsf@xxxxxxxxxxxxx> <1010013019.1281.6.camel@xxxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
User-agent: Gnus/5.090004 (Oort Gnus v0.04) Emacs/21.1 (i386-debian-linux-gnu)
begin  Steve Lord quotation:

> On Wed, 2002-01-02 at 16:22, Sean Neakums wrote:
>> begin  Sean Neakums quotation:
>> > begin  Steve Lord quotation:
>> >> I don't think there is much point going back to earlier kernels to
>> >> be honest. I will fire up an emacs build here, on a non-lvm
>> >> partition for starters.
>> >
>> > I went back to my 2.4.14-pre7 CVS pull, and the build went
>> > perfectly on the XFS volume.  But it seems that that kernel
>> > doesn't have preempt applied, so I'll rebuild 2.4.17-xfs without
>> > the patch, to eliminate that possibility.
>> I'm now running a fresh build of 2.4.17, without the preempt patch.
>> emacs reliably fails to build on the XFS volume, and builds
>> successfully on the ext2 volume.
> Doing a non-LVM build here, let's see if that works. I have
> my suspicions that LVM is going to be the magic factor here.

I'm using the LVM as shipped in SGI's CVS tree, which claims to be
"1.0.1-rc4(ish)".  As I recall, that's the version shipped in
stock 2.4.17.  Is there a newer version or some patches I should be
using?

 /////////////////  |                  | The spark of a pin
<sneakums@xxxxxxxx> |  (require 'gnu)  | dropping, falling feather-like.
 \\\\\\\\\\\\\\\\\  |                  | There is too much noise.
