I am in an FC6 x86_64 environment on an LVM2 ext3 filesystem (mkfs'd onto a
Seagate 750)... this is the host OS. The target OS is a CLFS environment,
built on an Areca 1280 hardware RAID6 array (arcmsr driver compiled from
source) of 4 Seagate 750s; all of the above runs on an AMD Opteron-based Tyan
board.
The target OS has three primary partitions:
The LVM partition has one PV (the Areca array); the volume group holds
multiple LVs.... a few LVs are straight XFS, the other LVs are
LUKS with the mapped device formatted XFS.
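For reference, the layout described above would have been built with something
like the following; this is only an illustrative sketch of that era's LVM2 and
cryptsetup usage, and the device path, VG/LV names, and mapper name are all
invented, not the actual ones on this box.

```shell
# Hypothetical reconstruction of the described layout; names are invented.
pvcreate /dev/sdb1                          # Areca array partition as the sole PV
vgcreate vg_data /dev/sdb1
lvcreate -L 200G -n lv_plain vg_data
lvcreate -L 200G -n lv_crypt vg_data

mkfs.xfs /dev/vg_data/lv_plain              # a "straight XFS" LV

cryptsetup luksFormat /dev/vg_data/lv_crypt
cryptsetup luksOpen /dev/vg_data/lv_crypt crypt_data
mkfs.xfs /dev/mapper/crypt_data             # XFS on the LUKS-mapped device
```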
I've read the doc at: http://web.bii.a-star.edu.sg/~james/100TB/14TB/#f
...it's useful, but...
In the middle of read/write operations on the XFS-formatted Areca array I
see the following repetitive messages (from /var/log/messages):
sd 3:0:1:0: reservation conflict
sd 3:0:1:0: SCSI error: return code = 0x00070018
end_request: I/O error, dev sdx, sector 681896097
The "SCSI error: return code" line consistently reports 0x00070018; the
"end_request: I/O error, dev sdx, sector" line reports varying sector values.
The Opteron shows significant I/O wait during these errors, and read/write
performance is marginal in general.
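For what it's worth, that return code appears to follow the SCSI mid-layer's
standard four-byte result layout (driver, host, msg, and status bytes, high to
low); a minimal Python sketch to unpack it, assuming that layout:

```python
# Unpack a Linux SCSI mid-layer return code into its four component bytes,
# assuming the conventional driver|host|msg|status layout.
def decode_scsi_result(code):
    return {
        "driver_byte": (code >> 24) & 0xFF,
        "host_byte":   (code >> 16) & 0xFF,   # 0x07 is DID_ERROR
        "msg_byte":    (code >> 8)  & 0xFF,
        "status_byte":  code        & 0xFF,   # 0x18 is RESERVATION CONFLICT
    }

result = decode_scsi_result(0x00070018)
# status_byte 0x18 matches the "reservation conflict" line in the log
```

The status byte 0x18 is exactly the RESERVATION CONFLICT status the first log
line reports, which suggests the I/O errors and the reservation conflicts are
the same event.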
I have run xfs_check on all the partitions (LUKS/XFS and plain XFS)... no
errors. I have scanned the array volume (at the hardware level)... no
errors. I have run badblocks -nsv on most of the partitions (LUKS/XFS
and plain XFS)... no errors.
Any and all thoughts/suggestions will be warmly received....
Sent from the linux-xfs mailing list archive at Nabble.com.