
Re: RedHat Rawhide + XFS rpm available for testing.

To: Andrew Klaassen <ak@xxxxxxx>
Subject: Re: RedHat Rawhide + XFS rpm available for testing.
From: Simon Matter <simon.matter@xxxxxxxxxxxxxxxx>
Date: Tue, 19 Jun 2001 08:37:01 +0200
Received: from mobile.sauter-bc.com (unknown [10.1.6.21]) by basel1.sauter-bc.com (Postfix) with ESMTP id 890CE57306; Tue, 19 Jun 2001 08:42:36 +0200 (CEST)
Cc: linux-xfs@xxxxxxxxxxx
Organization: Sauter AG, Basel
References: <3B299D51.85A6B204@xxxxxxxxxxx> <20010618210211.H2209@xxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
I guess the reason for the messages is that the kernel tries to
drive the disks too hard, the drives (or the controller) complain,
and the kernel then drops back to a lower transfer mode. I saw
something similar with a Promise Ultra100TX2 and software RAID5 on
4*60G IBM drives when I put heavy load on the not-yet-synced RAID.
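
If you want to verify that the mode really has been dropped, a quick
look at the 2.4 /proc/ide interface should show it. Here is a minimal,
untested Python sketch; I'm assuming the settings file exposes
"init_speed" and "current_speed" rows, and "hda" is only an example
device:

#!/usr/bin/env python
# Sketch: compare a drive's initial and current IDE transfer mode via
# the 2.4 /proc/ide interface, to see whether the kernel backed off
# after CRC errors.  Assumes /proc/ide/<drive>/settings has
# "init_speed" and "current_speed" rows; "hda" is only an example.
import sys

def read_setting(drive, name):
    # Each row in the settings file looks like: name value min max mode
    for line in open("/proc/ide/%s/settings" % drive):
        fields = line.split()
        if len(fields) >= 2 and fields[0] == name:
            return fields[1]
    return None

if __name__ == "__main__":
    drive = len(sys.argv) > 1 and sys.argv[1] or "hda"
    init = read_setting(drive, "init_speed")
    cur = read_setting(drive, "current_speed")
    print("%s: init_speed=%s current_speed=%s" % (drive, init, cur))
    # A current_speed lower than init_speed would mean the kernel
    # downgraded the transfer mode at some point.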

Simon

Andrew Klaassen wrote:
> 
> On Fri, Jun 15, 2001 at 12:29:53AM -0500,
> Russell Cattelan wrote:
> 
> > Ok after a few false starts I finally have a running
> >
> > RedHat rawhide kernel + XFS.
> >
> > The XFS patches are in sync with the devel tree.
> >
> > All the rpms can be gotten at:
> >
> > ftp://oss.sgi.com/projects/xfs/download/testing/RHrawhide/
> >
> > These have not been tested extensively so use at your own
> > risk.
> 
> Using these rpms, I saw this on bootup:
> 
> md: syncing RAID array md1
> md: minimum _guaranteed_ reconstruction speed: 100 KB/sec/disc.
> md: using maximum available idle IO bandwith (but not more than 100000 
> KB/sec) for reconstruction.
> md: using 508k window, over a total of 74280960 blocks.
> ide/host0/bus1/target0/lun0/part3 [events: 00000014](write) 
> ide/host0/bus1/target0/lun0/part3's sb offset: 74280960
> ide/host2/bus0/target0/lun0/part3 [events: 00000014](write) 
> ide/host2/bus0/target0/lun0/part3's sb offset: 74280960
> hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
> hda: dma_intr: error=0x84 { DriveStatusError BadCRC }
> hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
> hda: dma_intr: error=0x84 { DriveStatusError BadCRC }
> hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
> hda: dma_intr: error=0x84 { DriveStatusError BadCRC }
> hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
> hda: dma_intr: error=0x84 { DriveStatusError BadCRC }
> ide/host2/bus0/target1/lun0/part3 [events: 00000014](write) 
> ide/host2/bus0/target1/lun0/part3's sb offset: 74280960
> ide/host2/bus1/target0/lun0/part3 [events: 00000014](write) 
> ide/host2/bus1/target0/lun0/part3's sb offset: 74280960
> ide/host2/bus1/target1/lun0/part3 [events: 00000014](write) 
> ide/host2/bus1/target1/lun0/part3's sb offset: 74280960
> .
> ... autorun DONE.
> VFS: Mounted root (ext2 filesystem) readonly.
> (etc.)
> 
> This is with a CMD648 (rev. 1) controller on an ASUS CUBX (rev
> D) board.
> 
> Anybody know if these errors ("DriveReady SeekComplete Error",
> "DriveStatusError BadCRC") are serious?  They only occur on
> bootup, and they occur consistently at this point during bootup.
> They have occurred identically and consistently on two separate
> machines, with two different drive sets and two different BIOS
> revisions in them.  They do not occur with the kernel-2.4.2-2
> provided with RH7.1.
> 
> And if they are serious, who should I inform about them?
> 
> Andrew Klaassen

-- 
Simon Matter              Tel:  +41 61 695 57 35
Fr.Sauter AG / CIT        Fax:  +41 61 695 53 30
Im Surinam 55
CH-4016 Basel             [mailto:simon.matter@xxxxxxxxxxxxxxxx]


