http://oss.sgi.com/bugzilla/show_bug.cgi?id=289
------- Additional Comments From olaf@xxxxxxx 2003-11-19 12:58 PDT -------
My guess would be that one of the drives contains bad data, which is
somehow not detected by the RAID driver. (Lost redundancy?) Check
whatever logs are available to see whether problems are or were reported
for the RAID device or the individual disks.
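A minimal sketch of what to look at, assuming an md array named /dev/md0,
that mdadm is installed, and that kernel messages end up in
/var/log/messages (adjust names for your setup; raidtools-based systems
will differ):

    cat /proc/mdstat
    mdadm --detail /dev/md0
    dmesg | grep -i -e md0 -e raid
    grep -i -e md0 -e raid /var/log/messages

Look for members marked failed or missing and for I/O errors reported
against the underlying devices.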
If it is a single disk that's "polluting" your reads, you may be able to
determine which one and remove it from the RAID array. At that point you
should be able to salvage whatever data can still be rescued. Basically,
you'd have to find a way to remove and re-add each disk in turn _without_
the RAID starting a rebuild, then do read-only accesses to check whether
your data is good for that combination of disks. I don't have enough
experience with Linux software RAID to give concrete instructions for
this.
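One possible way to do that, sketched with mdadm and made-up device names
(/dev/md0 built from sda1, sdb1 and sdc1); whether a degraded array can be
started like this depends on the RAID level and your mdadm/kernel versions,
so treat it only as an illustration:

    # stop the array, then re-assemble it with one member left out
    mdadm --stop /dev/md0
    mdadm --assemble --run /dev/md0 /dev/sda1 /dev/sdc1    # sdb1 omitted
    # mount read-only (see the note on ro,norecovery below), inspect the
    # data, then stop the array again and repeat with a different disk omitted
    mdadm --stop /dev/md0

Because the omitted disk is never re-added, no rebuild is started. Re-adding
it later will normally trigger a resync, so only reintegrate disks once you
have copied off whatever data you need.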
Note that even if the RAID array seems to function fine after removing a
disk, you should re-initialize it once you've got your data off. Also note
that if you want to mount the XFS filesystem for these tests, use
'ro,norecovery' as the mount options: norecovery means no attempt is made
to replay the XFS log.
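For example, again with hypothetical names (/dev/md0, /mnt/rescue):

    mount -t xfs -o ro,norecovery /dev/md0 /mnt/rescue
    # ... read and copy off what you can ...
    umount /mnt/rescue

With ro,norecovery the log is not replayed, so recent metadata changes may
not be visible, but nothing is written to the (possibly damaged) array.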