Dave Chinner put forth on 5/20/2010 11:14 PM:
> I only ever use the noop scheduler with XFS these days. CFQ has been
> a steaming pile of ever changing regressions for the past 4 or 5
> kernel releases, so i stopped using it. Besides, XFS is often 10-15%
> faster on no-op for the same workload, anyway...
IIRC the elevator sits below the FS in the stack, and has a tighter
relationship to the block device driver and physical storage subsystem than
to the FS. I have one box with a 7.2K 500GB WD drive and a sata_sil
controller that doesn't support NCQ. Without NCQ, whether due to lack of
controller support or a drive blacklisted with ATA_HORKAGE_NONCQ, the
deadline and anticipatory elevators (the latter now removed from the
kernel, IIRC) yield vastly superior performance under load compared to
CFQ or noop.
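For reference, you can see which elevator a given device is using through
sysfs. A quick sketch; the device name sda and the scheduler list shown are
assumptions, adjust for your box:

```shell
#!/bin/sh
# Hypothetical device name -- substitute your own (sdb, hda, etc.).
DEV=sda
SCHED_FILE=/sys/block/$DEV/queue/scheduler

# The file lists the available elevators; the active one is bracketed,
# e.g.: noop anticipatory [deadline] cfq
if [ -r "$SCHED_FILE" ]; then
    cat "$SCHED_FILE"
    # Pull out just the bracketed (active) entry:
    sed 's/.*\[\(.*\)\].*/\1/' "$SCHED_FILE"
fi
```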
Noop fits well with good hardware RAID, whether a local PCI/PCI-X/PCIe RAID
card or a straight FC HBA talking to a SAN array controller. CFQ just gets in
the way with good hardware. In some testing I've done with FC HBAs and
target LUNs on IBM FAStT and Nexsan SAN arrays, deadline has shown a tiny
advantage over noop in a few synthetic tests. This testing was performed
on SLED 10 and Debian Etch guests atop VMware ESX 3 at night on weekends
when load across the ESX blade farm was near zero, but it was still done in
a virtual environment. On bare hardware, I'm not sure one would get the
same results. Anyway, the deadline elevator gave so little advantage over
noop that I'd still recommend noop on good hardware due to its near-zero
CPU overhead. Deadline has a few fancy tricks, so it will always eat more
CPU, even if only a modest amount.
I'd sum the elevator choice up this way: if you have a good storage
hardware and driver combo, such as fast SATA disks with good NCQ or just
about any SCSI, SAS, RAID, or SAN setup, go with noop. For lesser
hardware/drivers, i.e. lacking or crappy NCQ, or laptops with slow
4200/5400 rpm drives (even if they do have good NCQ), use deadline.
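If you want to act on that, the elevator can be switched per device at
runtime, or set globally at boot. A sketch assuming a device named sda; the
runtime write needs root:

```shell
#!/bin/sh
# Hypothetical device -- adjust for your system.
SCHED_FILE=/sys/block/sda/queue/scheduler

# Switch the elevator at runtime (takes effect immediately, no reboot
# or remount; needs root to write to sysfs):
echo noop > "$SCHED_FILE"

# Verify -- the active elevator is the bracketed entry:
cat "$SCHED_FILE"

# To make it the default for every device at boot, add this to the
# kernel command line in the boot loader (GRUB/LILO) config instead:
#   elevator=noop
```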
I agree with Dave that CFQ isn't all that great, and in my testing it's even
worse when used with Linux guests on ESX than it is on bare metal.
Caveat: I'm no expert, and I don't do storage subsystem performance testing
all day long. I'm just reporting my first hand experience. YMMV and all
the normal disclaimers apply.