On 21 May 2010 07:25, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
> Dave Chinner put forth on 5/20/2010 11:14 PM:
>> I only ever use the noop scheduler with XFS these days. CFQ has been
>> a steaming pile of ever changing regressions for the past 4 or 5
> kernel releases, so I stopped using it. Besides, XFS is often 10-15%
>> faster on no-op for the same workload, anyway...
> IIRC the elevator sits below the FS in the stack, and has a tighter
> relationship to the block device driver and physical storage subsystem than
> to the FS. I have one box with a 7.2K 500GB WD drive and a sata_sil
> controller that doesn't support NCQ. Without NCQ due to no controller
> support or ATA_HORKAGE_NONCQ-blacklisted drives, the deadline and anticipatory
> (now removed from the kernel IIRC) elevators yield vastly superior
> performance under load compared to CFQ or noop.
> Noop fits well with good hardware RAID, either a local PCI/PCI-X/PCIe RAID
> card or straight FC HBA talking to a SAN array controller. CFQ just gets in
> the way with good hardware. In some testing I've done with FC HBAs and
> target LUNs on IBM FAStT and Nexsan SAN arrays, deadline has shown a tiny
> advantage over noop with a few synthetic tests. This testing was performed
> on SLED 10 and Debian Etch guests atop VMWare ESX 3 at night on weekends
> when load across the ESX blade farm was near zero, but it was still done in
> a virtual environment. On bare hardware, I'm not sure one would get the
> same results. Anyway, the deadline elevator gave so little advantage over
> noop, I'd still recommend noop on good hardware due to its zero CPU overhead.
> Deadline has a few fancy tricks so it will always eat more CPU, even though
> it's a modest amount.
> I'd sum the elevator choice up this way: If you have a good storage
> hardware and driver combo such as fast SATA disks with good NCQ, or just
> about any SCSI, SAS, RAID, or SAN setup, go with noop. For lesser
> hardware/drivers, use deadline (i.e. lacking or crappy NCQ, or on laptops
> due to the slow 4200/5400 rpm drives, even if they do have good NCQ).
> I agree with Dave that CFQ isn't all that great, and in my testing it's even
> worse when used with Linux guests on ESX than it is on bare metal.
> Caveat: I'm no expert, and I don't do storage subsystem performance testing
> all day long. I'm just reporting my first hand experience. YMMV and all
> the normal disclaimers apply.
Thanks for the answers.
I do value my data a lot (that's why I switched from Windows some years
ago), and even though this is a laptop with battery protection, I keep
crashing and hard-locking it because I always like to run -rcX kernels
and to fool around with lots of dangerous stuff/settings/etc.
Actually, that is one of the reasons I stick with XFS instead of moving
to EXT4 or the like. I've been using and torturing XFS for a couple of
years now and I have NEVER suffered any corruption. I only had a couple
of occasions where unimportant data loss happened, and that was completely
expected because I was an ass. Even then, I only lost the data that was
unsynced during the last minute.
I forgot to say that I have a SATA-I 5400 rpm hard drive; it does
support NCQ, and since this is a laptop there is no RAID or similar.
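A quick way to confirm NCQ is actually active (sda below is just an assumed device name) is the queue depth libata exposes in sysfs: it reads 1 when NCQ is off and 31 (the NCQ tag limit) when it is in use:

```shell
# Prints 31 when NCQ is enabled, 1 when the drive runs without it.
# "sda" is an example device name; substitute your own.
cat /sys/block/sda/device/queue_depth

# The kernel also logs NCQ status at probe time:
dmesg | grep -i ncq
```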
I've been running a few tests with bonnie++ and hdparm. hdparm reports
my bare hard drive read speed as 67 MB/s. With bonnie++ the maximum I
can get is 52 MB/s with noop and CFQ, while deadline only gives me
48 MB/s. This is not bad at all. noop is a tad faster in all the tests;
the only metric where it does worse is read latency, although read
throughput appears to be the same.
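For reference, those figures came from runs along these lines (the paths, sizes, and exact flags are examples, not the precise invocations used):

```shell
# Raw sequential read speed of the device; run it a few times and average.
hdparm -t /dev/sda

# bonnie++ wants a dataset bigger than RAM so the page cache can't hide
# the disk; -s is the test file size, -n 0 skips the small-file tests.
# (Add -u <user> if running as root, which bonnie++ otherwise refuses.)
bonnie++ -d /tmp/bench -s 4g -n 0
```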
So it is agreed that CFQ sucks right now. I'll continue my testing, but
now with proper daily use, to see which is better: deadline or noop.
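For switching elevators between those daily-use runs, no reboot is needed; the active scheduler can be read and changed per device through sysfs (sda is an example device name, and "anticipatory" will be absent on kernels that have dropped it):

```shell
# List the available elevators; the active one is shown in brackets,
# e.g. "noop anticipatory deadline [cfq]".
cat /sys/block/sda/queue/scheduler

# Switch to noop at runtime (immediate, but not persistent):
echo noop > /sys/block/sda/queue/scheduler

# To make it permanent, boot with "elevator=noop" on the kernel
# command line instead.
```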