
CVS kernel and RAID5 superblock problems

To: "Linux XFS" <linux-xfs@xxxxxxxxxxx>
Subject: CVS kernel and RAID5 superblock problems
From: "Jeff Duffy" <jeff@xxxxxxxxxx>
Date: 26 Apr 2001 00:00:36 EDT
Reply-to: "Jeff Duffy" <jeff@xxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx

Greetings,

 I am running the latest (as of Wed Apr 25 23:51:02 EDT 2001) CVS snapshot of
the XFS kernel on Red Hat 7.0 (upgraded w/7.1 packages), using a RAID5 array
of 3 ATA-100 IDE disks on two separate controllers. About two weeks after the
initial build and creation of the XFS filesystem on the md device, I
experienced a system crash due to a power outage. After rebooting, the kernel
insists that one disk of the RAID array is bad and will not add it back (I am
still running fine on the other two... for now).

 While I am fairly sure this is not the fault of XFS, I am leery of hunting
down patches to the stock kernel that may conflict with the XFS code before
asking for an answer here.

The pertinent messages (from dmesg):

..
raid5 personality registered
raid5: measuring checksumming speed
   8regs     :  1232.400 MB/sec
   32regs    :   824.800 MB/sec
   pII_mmx   :  1894.800 MB/sec
   p5_mmx    :  2420.800 MB/sec
raid5: using function: p5_mmx (2420.800 MB/sec)
md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
md.c: sizeof(mdp_super_t) = 4096
autodetecting RAID arrays
(read) hde3's sb offset: 38950528 [events: 00000014]
(read) hdi3's sb offset: 38950528 [events: 0000001f]
(read) hdk3's sb offset: 38950528 [events: 0000001f]
autorun ...
considering hdk3 ...
  adding hdk3 ...
  adding hdi3 ...
  adding hde3 ...
created md0
bind<hde3,1>
bind<hdi3,2>
bind<hdk3,3>
running: <hdk3><hdi3><hde3>
now!
hdk3's event counter: 0000001f
hdi3's event counter: 0000001f
hde3's event counter: 00000014
md: superblock update time inconsistency -- using the most recent one
freshest: hdk3
md: kicking non-fresh hde3 from array!
unbind<hde3,2>
export_rdev(hde3)
md0: max total readahead window set to 1024k
md0: 2 data-disks, max readahead per data-disk: 512k
raid5: device hdk3 operational as raid disk 2
raid5: device hdi3 operational as raid disk 1
raid5: md0, not all disks are operational -- trying to recover array
raid5: allocated 3264kB for md0
raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2
RAID5 conf printout:
 --- rd:3 wd:2 fd:1
 disk 0, s:0, o:0, n:0 rd:0 us:1 dev:[dev 00:00]
 disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdi3
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdk3
RAID5 conf printout:
 --- rd:3 wd:2 fd:1
 disk 0, s:0, o:0, n:0 rd:0 us:1 dev:[dev 00:00]
 disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdi3
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdk3
md: updating md0 RAID superblock on device
hdk3 [events: 00000020](write) hdk3's sb offset: 38950528
md: recovery thread got woken up ...
md0: no spare disk to reconstruct array! -- continuing in degraded mode
md: recovery thread finished ...
hdi3 [events: 00000020](write) hdi3's sb offset: 38950528
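
If I understand the raidtools workflow correctly, the event-counter mismatch
(hde3 at 00000014 vs. 0000001f on the others) just means hde3's superblock is
stale from the crash, not that the disk is necessarily bad. Assuming the disk
itself checks out, I believe the re-add would look something like this (hde3
and md0 as above; this is my guess at the procedure, corrections welcome):

```shell
# Confirm the array is running degraded: should show md0 active, [3/2],
# with hde3 absent from the device list
cat /proc/mdstat

# Optionally verify the disk surface first before trusting it again
badblocks -v /dev/hde3

# Hot-add the kicked disk back; md should then start reconstructing
# parity/data onto it in the background
raidhotadd /dev/md0 /dev/hde3

# Watch the rebuild progress
cat /proc/mdstat
```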
