
Re: xfs_force_shutdown with linux-2.4.6-xfs-07052001?

To: Seth Mos <knuffie@xxxxxxxxx>
Subject: Re: xfs_force_shutdown with linux-2.4.6-xfs-07052001?
From: KELEMEN Peter <Peter.Kelemen@xxxxxxx>
Date: Thu, 2 Aug 2001 23:51:19 +0200
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <4.3.2.7.2.20010801093900.033a2008@xxxxxxxxxxxxx>
Organization: CERN European Laboratory for Particle Physics, Switzerland
References: <E15RebW-0007RT-00@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4.3.2.7.2.20010801093900.033a2008@xxxxxxxxxxxxx>
Reply-to: KELEMEN Peter <Peter.Kelemen@xxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
User-agent: Mutt/1.3.19i
* Seth Mos (knuffie@xxxxxxxxx) [20010801 09:46]:

> That is probably caused by an I/O error. I see you are using LVM,
> which could be related.  If an I/O error occurs, the filesystem
> shuts down to prevent further damage.

Well.  I checked the hard drive (read-only) in another machine, and
it reports no errors.  S.M.A.R.T. is happy: no relocated sectors, no
raw read, seek, or CRC errors.  Nothing.
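
For reference, a minimal sketch of the kind of check described above,
using smartctl from smartmontools (the device name is a placeholder
for the actual drive, not the one in question):

    smartctl -H /dev/hdb    # overall SMART health self-assessment
    smartctl -A /dev/hdb    # attribute table: look at the relocated
                            # sector count and the raw read, seek,
                            # and CRC error counters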

> This error could be on the device (a bad block on the disk), or
> something in a software layer like md or LVM could be going wrong
> in a way that XFS sees as a hardware error.  What actually is the
> LVM device?  Do you use md or any other software that might
> interfere?  IDE or SCSI, and what controller and system?  How is
> the LVM device constructed?

EIDE, no UDMA.  No MD at all, and the LVM setup is pretty
straightforward: a lonely 20G disk sliced in two, both slices sitting
in an extended partition.  The big partition is actually the only PV,
carrying a VG with six LVs.  No magic; this is supposed to be a
workstation.  The kernel is tracked from CVS and compiled with
egcs-1.1.2.  No overclocking.
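
For the record, a rough sketch of how a layout like that is built
with the LVM tools (the device, VG, and LV names and the sizes here
are made up for illustration, not the actual ones):

    pvcreate /dev/hdb5              # the single big partition is the only PV
    vgcreate vg0 /dev/hdb5          # one VG on that PV
    for lv in root usr var home tmp opt; do
        lvcreate -L 2G -n $lv vg0   # six LVs; names and sizes illustrative
    done
    mkfs.xfs /dev/vg0/home          # XFS on each LV as needed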

The shutdown hasn't happened since (knock on wood).

Peter

-- 
    .+'''+.         .+'''+.         .+'''+.         .+'''+.         .+''
 Kelemen Péter     /       \       /       \     Peter.Kelemen@xxxxxxx
.+'         `+...+'         `+...+'         `+...+'         `+...+'

