http://oss.sgi.com/bugzilla/show_bug.cgi?id=791
glaucon.hunn@xxxxxxxxx changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |RESOLVED
         Resolution|                            |INVALID
------- Additional Comments From glaucon.hunn@xxxxxxxxx 2008-09-27 15:53 CST -------
Hrm, I think it's actually the cheap SiI 3114 SATA card. I rebooted yesterday to
let it sit in memtest again, and after everything looked good I rebooted into
Linux, only the RAID did not come back up:
md: Autodetecting RAID arrays.
md: invalid superblock checksum on sdb1
md: sdb1 does not have a valid v0.90 superblock, not importing!
md: invalid raid superblock magic on sdd1
md: sdd1 does not have a valid v0.90 superblock, not importing!
md: Scanned 4 and added 2 devices.
md: autorun ...
md: considering sdc1 ...
md: adding sdc1 ...
md: adding sda1 ...
md: created md0
md: bind<sda1>
md: bind<sdc1>
md: running: <sdc1><sda1>
raid5: device sdc1 operational as raid disk 0
raid5: device sda1 operational as raid disk 4
raid5: not enough operational devices for md0 (4/6 failed)
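(For anyone following along, the rejected superblocks can be inspected
directly; a rough sketch, assuming mdadm is installed and the device names are
the same as on my box:
# mdadm --examine /dev/sdb1
# mdadm --examine /dev/sdd1
That dumps the v0.90 superblock fields (magic, checksum, event counts), which
should show whether the on-disk metadata is actually garbage or just failing
the checksum.)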
After powering it off, letting it sit for a bit, and turning it back on,
everything comes back up (note: I removed sdd1 from the RAID so I could try
copying some data off):
md: Autodetecting RAID arrays.
md: invalid raid superblock magic on sdd1
Bad version number 0.0 on sdd1
md: sdd1 does not have a valid v0.90 superblock, not importing!
md: Scanned 6 and added 5 devices.
md: autorun ...
md: considering sdf1 ...
md: adding sdf1 ...
md: adding sde1 ...
md: adding sdc1 ...
md: adding sdb1 ...
md: adding sda1 ...
md: created md0
md: bind<sda1>
md: bind<sdb1>
md: bind<sdc1>
md: bind<sde1>
md: bind<sdf1>
md: running: <sdf1><sde1><sdc1><sdb1><sda1>
raid5: device sdf1 operational as raid disk 1
raid5: device sdc1 operational as raid disk 0
raid5: device sdb1 operational as raid disk 3
raid5: device sda1 operational as raid disk 4
raid5: allocated 6291kB for md0
raid5: raid level 6 set md0 active with 4 out of 6 devices, algorithm 2
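(Once the new controller is in, the plan for bringing sdd1 back is roughly the
following; untested here so far, so treat it as a sketch:
# mdadm --stop /dev/md0
# mdadm --assemble --force /dev/md0 /dev/sd[abcef]1
# mdadm /dev/md0 --add /dev/sdd1
i.e. assemble from the known-good members, then add sdd1 back so the raid6 can
rebuild it from parity.)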
Unfortunately I don't have another computer with 5 SATA ports free, but I just
ordered another card (using the Promise chipset this time...), so I'll let you
know what happens when that comes.
As for the .config, I've tried Debian's stock kernel and a 2.6.27-rc7; I'll
post the .config for the 2.6.27-rc7 that I built (see the grep sketch after the
LVM output below for the options that matter here). As for LVM, I'm not using
it for anything special, but here is the config:
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               mandy
  PV Size               2.72 TB / not usable 256.00 KB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              713347
  Free PE               0
  Allocated PE          713347
  PV UUID               cfuKtx-aiwq-3wXe-GVYh-gHVu-Vu5K-cwPFX4
# lvdisplay
  --- Logical volume ---
  LV Name                /dev/mandy/codex
  VG Name                mandy
  LV UUID                EvNadF-3sbt-jrUn-rnZd-v2zw-oAII-ZNad2R
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                2.72 TB
  Current LE             713347
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
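(Rather than pasting the whole .config here, the options that matter for this
setup can be pulled out with something like the following; sata_sil is the
driver for the SiI 3114:
# grep -E 'CONFIG_(BLK_DEV_MD|MD_RAID|SATA_SIL|DM_)' .config
I'll attach the full file as well.)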
Also, I have 6 of these in a RAID 6:
Device Model: WDC WD7500AACS-00ZJB0
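(Before blaming the drives themselves I also want to pull SMART data off each
one; a sketch, assuming smartmontools is installed:
# smartctl -a /dev/sda | grep -i -e reallocated -e pending -e offline
Reallocated, pending, or offline-uncorrectable sectors there would point at the
disks rather than the SiI card.)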
I'm closing the bug because the hardware just seems too fishy at this point; I
will reopen it if I still have problems when I move it to a different computer
next week. Or feel free to reopen it if it still smells like an XFS bug
somewhere; I'm happy to try patches or whatnot.
--
Configure bugmail: http://oss.sgi.com/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.