XFS hangs and freezes with LSI 9265-8i controller on high i/o
Michael Monnerie
michael.monnerie at is.it-management.at
Fri Jun 15 04:52:17 CDT 2012
On Friday, 15 June 2012, 10:16:02, Dave Chinner wrote:
> So, the average service time for an IO is 10-16ms, which is a seek
> per IO. You're doing primarily 128k read IOs, and maybe one or 2
> writes a second. You have a very deep request queue: > 512 requests.
> Have you tuned /sys/block/sda/queue/nr_requests up from the default
> of 128? This is going to be one of the causes of your problems - you
> have 511 outstanding write requests, and only one read at a time.
> Reduce the io scheduler queue depth, and potentially also the device
> CTQ depth.
Dave, I'm puzzled by this. I'd have thought that a higher number of
requests would help the block layer re-sort I/O in the elevator and
therefore improve throughput. Why would 128 be better than 512 here?
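(Just so we're talking about the same knob - a quick sketch, assuming
the device is sda and Python is only used for illustration, of how one
would check the current depth and drop it back to the default of 128
for testing; run as root:

    # /sys/block/sda/queue/nr_requests is the same sysfs file Dave
    # mentioned above; read it, then write the default back
    path = "/sys/block/sda/queue/nr_requests"
    with open(path) as f:
        print("current nr_requests:", f.read().strip())
    with open(path, "w") as f:
        f.write("128\n")

The same can of course be done with a plain echo into that file.)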
And maybe Matthew could benefit from limiting vm.dirty_bytes. I've seen
that when this value is too high the server gets stuck on lots of
writes; for streaming it's better to keep it smaller so the disk writes
can keep up and delays don't get too long.
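(A minimal sketch of what I mean, assuming a limit of 256 MiB - the
number is only an example and has to be tuned to the array's write
speed:

    # writing vm.dirty_bytes makes the kernel ignore vm.dirty_ratio
    # (whichever of the two was set last wins)
    limit = 256 * 1024 * 1024
    with open("/proc/sys/vm/dirty_bytes", "w") as f:
        f.write(str(limit) + "\n")

To make it permanent, the equivalent vm.dirty_bytes line would go into
/etc/sysctl.conf instead.)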
> Oh, I just noticed you might be using CFQ (it's the default in
> dmesg). Don't - CFQ is highly unsuited for hardware RAID - it's
> heuristically tuned to work well on single SATA drives. Use deadline,
> or preferably for hardware RAID, noop.
Wouldn't deadline be better with a higher request queue size? As I
understand it, noop only merges adjacent I/Os, while deadline does a
bit more and should be able to build larger contiguous I/O areas
because it waits a bit longer before a flush.
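(For completeness, a small sketch, again assuming sda, of how I'd check
which elevator is active and switch it at runtime to compare:

    path = "/sys/block/sda/queue/scheduler"
    with open(path) as f:
        # the currently active elevator is shown in [brackets]
        print("available:", f.read().strip())
    with open(path, "w") as f:
        f.write("deadline\n")   # or "noop"; takes effect immediately

That way the CFQ vs. deadline vs. noop behaviour can be compared on the
live system without rebooting.)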
--
with kind regards,
Michael Monnerie, Ing. BSc
it-management Internet Services: Protéger
http://proteger.at [pronounced: Prot-e-schee]
Tel: +43 660 / 415 6531