> The bottom line is that it's very hard to do good comparisons that are
> useful in the general case.
It has always amazed me watching people go about benchmarking. I should
have a blog called "you're doing it wrong" or something.
Personally, I use benchmarks to validate what I already believe to be true.
So before I start I have a prediction as to what the answer should be,
based on my understanding of the system being measured. Back when I
was doing this a lot, I was always within a factor of 10 (not a big
deal) and usually within a factor of 2 (quite a bit bigger deal).
When things didn't match up that was a clue that either
- the benchmark was broken
- the code was broken
- the hardware was broken
- my understanding was broken
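A minimal sketch of that factor-of-10 / factor-of-2 sanity check (the
function name and thresholds are mine, chosen to match the text, not
anything from lmbench):

```python
# Compare a measured result against a prediction. Within 2x is fine,
# within 10x is worth a closer look, beyond 10x something is broken:
# the benchmark, the code, the hardware, or your model of the system.

def check_prediction(predicted, measured):
    """Return 'good', 'suspect', or 'broken' based on the ratio."""
    ratio = max(predicted, measured) / min(predicted, measured)
    if ratio <= 2:
        return "good"      # within a factor of 2
    if ratio <= 10:
        return "suspect"   # within a factor of 10: go look closer
    return "broken"        # benchmark, code, hardware, or model

print(check_prediction(50.0, 48.0))   # -> good
print(check_prediction(50.0, 12.0))   # -> suspect
print(check_prediction(50.0, 3.0))    # -> broken
```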
If you start a benchmark and you don't know what the answer should be,
at the very least within a factor of 10 and ideally within a factor of 2,
you shouldn't be running the benchmark. Well, maybe you should, they
are fun. But you sure as heck shouldn't be publishing results unless
you know they are correct.
This is why lmbench, to toot my own horn, measures what it does. If you
go run it and memorize the results, you can tell yourself "well, this machine
has sustained memory copy bandwidth of 3.2GB/sec, the disk I'm using
can read at 60MB/sec and write at 52MB/sec (on the outer zone where I'm
going to run my tests), it does small seeks in about 6 milliseconds,
I'm doing sequential I/O, the bcopy is in the noise, the blocks are big
enough that the seeks are hidden, so I'd like to see a steady 50MB/sec
or so on a sustained copy test".
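The back-of-the-envelope above can be written down. The bandwidth and
seek figures come from the text; the per-seek transfer size is my
assumption, picked large enough that, as the text says, the seeks are
hidden:

```python
# Estimate sustained copy throughput from component measurements.
# Numbers from the text: 3.2GB/sec bcopy, 60/52 MB/sec disk, 6ms seeks.
mem_copy_MBps = 3200.0   # sustained memory copy bandwidth
write_MBps    = 52.0     # outer-zone write; slower side bounds the copy
seek_ms       = 6.0
block_MB      = 10.0     # assumed transfer per seek (my illustration)

# The bcopy is ~60x faster than the disk, so it is in the noise.
transfer_ms    = block_MB / write_MBps * 1000.0
total_ms       = seek_ms + transfer_ms
effective_MBps = block_MB / (total_ms / 1000.0)
print(round(effective_MBps, 1))   # -> 50.4, a steady 50MB/sec or so
```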
If you have a mental model for how the bits of the system work, you
can decompose the benchmark into its parts, predict the result, run
it, and compare. It'll match, or, Lucy, you have some 'splainin' to do.
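The decompose-predict-run-compare loop, sketched with illustrative
component numbers (the breakdown and the pretend measurement are mine,
not lmbench output):

```python
# Decompose a copy benchmark into parts, sum the predictions, then
# compare against what the run actually measured.
components_ms = {
    "read 10MB @ 60MB/s":  10 / 60 * 1000,
    "write 10MB @ 52MB/s": 10 / 52 * 1000,
    "2 seeks @ 6ms":       2 * 6,
}
predicted_ms = sum(components_ms.values())
measured_ms  = 380.0   # pretend result from an actual run

ratio = max(predicted_ms, measured_ms) / min(predicted_ms, measured_ms)
print(f"predicted {predicted_ms:.0f}ms, measured {measured_ms:.0f}ms")
if ratio > 2:
    print("'splainin to do: recheck benchmark, code, hardware, or model")
```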
Larry McVoy lm at bitmover.com http://www.bitkeeper.com