>> Check your disks. I've seen harddisks dying in that way.
>> Suddenly access times are horrible. Test random access I/O and
>> compare times, not sequential - those might still look good.
> [ ... ] When messing around with the BIOS settings (switching
> SATA mode from RAID to IDE) I get quite an obvious message
> stating that the S.M.A.R.T. status of one of the disks is
> BAD. Removing that disk solved the problem. [ ... ]
The suggestion to check the disk error status was a good one, and
relates to an important but little-known difference between SATA
and SAS drives.
Ordinary SATA drives are usually programmed to perform a large
number of firmware-driven retries on data errors, hanging the IO
subsystem for the duration, while SAS drives, which are expected
to be used in RAID sets, don't, and report errors immediately.
So-called "RAID edition" SATA drives often behave like SAS drives
in this respect.
It is often possible but usually quite awkward to change that
setting for ordinary SATA drives.
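On drives that support SCT Error Recovery Control, smartctl can
query and cap the retry time. A sketch (the device name /dev/sdX is
a placeholder, and not every SATA drive accepts these commands):

```shell
# Show the drive's current SCT ERC read/write timeouts, if supported.
smartctl -l scterc /dev/sdX

# Cap both read and write error recovery at 7.0 seconds
# (the value is in tenths of a second), which is roughly the
# behaviour of "RAID edition" drives.
smartctl -l scterc,70,70 /dev/sdX
```

Note that on many drives this setting does not survive a power
cycle, so it has to be reapplied at every boot.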
Disabling extended firmware retries is usually a good idea with
Linux anyhow, as the kernel does its own retries, and it is
especially good for drives used in RAID sets with redundancy, to
reduce latency in the case of read errors: the RAID layer can
rebuild the data from the other drives instead of waiting minutes
for the drive to give up.
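Related to that, the Linux SCSI layer has its own per-device command
timeout, and the two should be kept consistent. A sketch, again with
sdX as a placeholder device:

```shell
# The kernel gives up on a command after this many seconds
# (default 30) and resets the device, which makes things worse
# if the drive is still busy with internal retries.
cat /sys/block/sdX/device/timeout

# If the drive's retries cannot be capped (no SCT ERC support),
# a common workaround is instead to raise the kernel timeout
# above the drive's worst-case recovery time:
echo 180 > /sys/block/sdX/device/timeout
```

Like the SCT ERC setting, this does not persist across reboots.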