Jonathan Day wrote:
> What I have not (yet) seen is any work on relating the
> results. Is the bug in the design? The implementation?
> Some combination thereof? Is something correctly
> written but not functioning because something it
> depends on isn't working correctly?
Currently, you can get some idea of where things failed
(kernel didn't build, machine couldn't reboot, or, if the
system crashes during the tests, the crash info, etc.).
Working out whether the cause is a design bug or an
implementation bug is likely beyond automation.
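The stage-level triage, at least, is easy to script. A rough
Python sketch (every log marker below is made up, so treat it
as an illustration, not any real harness's format):

import re

# Hypothetical failure patterns per stage; a real harness would
# key off its own logs. Order matters: the earliest stage wins.
STAGE_PATTERNS = [
    ("build", re.compile(r"error: |undefined reference to")),
    ("boot",  re.compile(r"Kernel panic - not syncing")),
    ("test",  re.compile(r"\bOops\b|\bBUG:")),
]

def classify_failure(log_text):
    """Return the first stage whose failure pattern appears in
    the log, or None if nothing recognizable went wrong."""
    for stage, pattern in STAGE_PATTERNS:
        if pattern.search(log_text):
            return stage
    return None

print(classify_failure("foo.c:10: error: parse error"))  # build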
> It would even be useful if we could cross-reference
> some of the benchmarks with the Linux graphing
> project, so that we could see how the complexity of [...]
I believe they do have some plans to graph stuff (ping
Martin for details), and info could possibly be pulled out
of the data/results we provide to feed other people's
needs.
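If the result files were simple "name: value" lines (an
assumption on my part, I haven't checked the actual format),
feeding a grapher would only take a few lines of Python:

import csv
import re
import sys

# Assumed "name: value" result-line format; purely illustrative.
RESULT_LINE = re.compile(r"^(?P<name>[\w./-]+):\s*(?P<value>-?[0-9.]+)\s*$")

def results_to_csv(path, out=sys.stdout):
    """Dump (benchmark, value) pairs as CSV for other people's
    graphing tools to pick up."""
    writer = csv.writer(out)
    writer.writerow(["benchmark", "value"])
    with open(path) as f:
        for line in f:
            m = RESULT_LINE.match(line)
            if m:
                writer.writerow([m.group("name"), m.group("value")])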
> Test suites are necessary. Test suites are great.
> Anyone working on a test suite deserves many kudos and
> much praise. Test suites that are relatable enough
> that you can see the same problem from different
> angles -- those are worth their printout weight in
> gold.
Yeah. :)
thanks,
Nivedita