LTP FAQ

Our FAQ is a bit unorganized at this point. We're adding questions as they pop up; when topic areas become clear, we'll reorder the questions into topic areas.
What is LTP?
There is no such thing as a perfect test base. It's only useful if it keeps up
with new and changing functionality, and if it actually gets used.
Hard Answer: Any way you want to. We have provided some example
scripts and files that you can use as a starting point, but they are only
that. The tests can be run individually or with the help of a test
driver; we have included a test driver called "pan" that is easy to use.
A problem that needs to be addressed is how to store different command lines and
determine when to run them. For example, should a bigmem test be
run on a 128MB system? Should a file system test that requires 2GB of scratch
space be run on a system without that much space? These two examples highlight
a couple of issues that need to be addressed in the runtime framework
for tests.
The simplest form is the exit status of the program. If a test encounters
unexpected or improper results, it calls exit() with a non-zero status to signal
to the caller that the test failed. Conversely, if the test completes with
the functionality performing as expected, a zero exit status signals that
the test passed. Summarizing: exit(0)==PASS,
exit(!0)==FAIL. Additionally, the test program can report
information on standard output indicating why the test passed or failed.
Another form of results reporting used within SGI is a hybrid of the exit() style. It still uses the exit status for
reporting, but incorporates a standard format for the test program's standard
output. This accomplishes two things: it makes the functional test results
easier to understand, for both humans and analysis tools, and it allows more
than one test case result to be reported
from a single test program. Contrast this with pure exit status
analysis: if a test program contains N test cases and one test case fails,
the test program returns a non-zero value, and it appears that the whole test
fails. Compare this to a scenario where a test reports a
PASS or FAIL token on standard output for each test case.
As a quick example, consider running the dup02 test program.
Its exit status should be zero, indicating that all the test cases
passed. The exit status of the test program can be examined
immediately after running it by printing the $? shell variable.
Finally, we can analyze the results of the dup02 test program on two fronts. We
can see from the two PASS lines that both test cases in the dup02 program
passed, and we can see from the exit status that the dup02 program as a whole
passed.
Determining which test case is reporting a failure is sometimes a challenge. In
a test program with multiple test cases, earlier test cases can cause
failures in subsequent ones. If a test is well written, the output on
standard output can be useful here.
Once the test case under which the "failure" is occurring has been isolated,
determine whether the "failure" is really a failure. If a test is
poorly written and does not handle certain environmental conditions properly, a
false failure may occur.
Comparing the failure on the system under test against a baseline system is
important. If a test fails on a system but it is unknown whether it ever passed on
any other system, it is not possible to determine whether the problem is a new failure.
Comparing how the functionality under test behaves on the baseline against how
it behaves on the test system is the most important method for determining how the
"failure" occurred.
In software testing, the word 'test' has become overloaded and increasingly
tough to define. In operating system testing, we might be able to categorize
tests into a few different areas: functional (regression), duration, stress
(load), and performance.
Copyright © 1999 Silicon Graphics, Inc. All rights reserved.