
EVMS v1.2.0 + XFS 1.2: RAID5: always unclean after proper shutdown (Debian 3.0)



Hi there,

I'm a little puzzled by this strange scenario:

I've installed XFS 1.2 and EVMS 1.2.0 on a Debian GNU/Linux 3.0 (i386) box 
running kernel 2.4.19. I created a RAID5 storage region from four 9 GB SCSI 
drives, put an EVMS container on top of that region, created individual 
regions inside the container, and finally built logical volumes from those 
regions. I put XFS on these logical volumes.
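
For reference, here is roughly what the layout would look like in classic 
raidtools terms. This is only a sketch for illustration -- EVMS keeps its own 
metadata and does not read /etc/raidtab -- but the member devices, chunk size 
and parity algorithm match the log output further down:

    # /etc/raidtab-style sketch of the RAID5 region (illustration only)
    raiddev /dev/md0
        raid-level              5
        nr-raid-disks           4
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              32
        parity-algorithm        left-symmetric
        device                  /dev/sdc
        raid-disk               0
        device                  /dev/sdd
        raid-disk               1
        device                  /dev/sde
        raid-disk               2
        device                  /dev/sdf
        raid-disk               3

The filesystems were then created on the EVMS logical volumes, along the 
lines of (the volume name is a placeholder):

    mkfs.xfs /dev/evms/<volume-name>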

After a clean shutdown I always see messages like the following in 
syslog:

    kernel: evms: md core: [md0, level=5] raid array is not clean -- starting background reconstruction
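
The "not clean" decision comes from the state flags in the md persistent 
superblock stored at the end of each member disk. Assuming mdadm is installed 
(the md regions use the standard 0.90 superblock format, so it can read 
them), the state a shutdown leaves behind can be inspected read-only:

    # dump the md superblock of one member; the State line shows
    # whether the array was marked clean at shutdown
    mdadm --examine /dev/sdc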

This usually took only moments, but after I added two more SCSI drives 
(with IDs below those of the existing drives, in case that matters) the 
whole thing became much more time-consuming:
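
(With the 2.4 kernel, SCSI disks are named in probe order, so adding drives 
at lower IDs shifts every existing device name up. Assuming the members used 
to be sda through sdd -- an assumption, I only know the current names from 
the log below -- the renaming would look like this:

    sda -> sdc   (raid disk 0)
    sdb -> sdd   (raid disk 1)
    sdc -> sde   (raid disk 2)
    sdd -> sdf   (raid disk 3)

The persistent superblock identifies members by content rather than by 
device name, so the renaming itself should be harmless.)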

kernel: evms: md core: [md0, level=5] raid array is not clean -- starting background reconstruction
kernel: evms: md raid5: raid5_run: device sdf operational as raid disk 3
kernel: evms: md raid5: raid5_run: device sde operational as raid disk 2
kernel: evms: md raid5: raid5_run: device sdd operational as raid disk 1
kernel: evms: md raid5: raid5_run: device sdc operational as raid disk 0
kernel: evms: md raid5: raid5_run: raid set md0 not clean; reconstructing parity
kernel: evms: md raid5: RAID5 conf printout:
kernel: evms: md raid5:  --- rd:4 wd:4 fd:0
kernel: evms: md raid5:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sdc
kernel: evms: md raid5:  disk 1, s:0, o:1, n:1 rd:1 us:1 dev:sdd
kernel: evms: md raid5:  disk 2, s:0, o:1, n:2 rd:2 us:1 dev:sde
kernel: evms: md raid5:  disk 3, s:0, o:1, n:3 rd:3 us:1 dev:sdf

# cat /proc/evms/mdstat
Enterprise Volume Management System: MD Status
Personalities : [evms_linear] [evms_raid0] [evms_raid1] [evms_raid5] 
md0 : active evms_raid5 sdf[3] sde[2] sdd[1] sdc[0]
      26674560 blocks level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
      [=====>...............]  resync = 27.6% (2461276/8891520) finish=103.1min speed=1036K/sec
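
For what it's worth, the finish estimate is consistent with the reported 
speed:

    (8891520 - 2461276) KB remaining / 1036 KB/sec ~= 6207 sec ~= 103 min

so the resync really is crawling along at about 1 MB/sec across a ~8.5 GB 
member.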


Any idea what could be going wrong?!

Thanks,

Ralf


-- 
   L I N U X       .~.
  The  Choice      /V\
   of a  GNU      /( )\
  Generation      ^^-^^