
Revision 1.1, Fri Dec 29 16:50:15 2000 UTC by nstraz
Branch: MAIN
CVS Tags: HEAD

These are content files for use with "mp," a tool for creating web pages
on oss.sgi.com using the global templates.

 <FONT FACE="ARIAL NARROW, HELVETICA" SIZE="5"><B>LTP FAQ</B></FONT>
 <p>
 <i>Our FAQ is a bit disorganized at this point.  We're adding questions as they pop up.  When topic areas
 become clear, we'll reorder the questions accordingly.</i>
 <P>
 <FONT FACE="ARIAL NARROW, HELVETICA">
 <!-- questions section -->
 
 <p>
 <a href="#Q1"><b>What is LTP?</b></a><br>
 <a href="#Q2"><b>Linux already has a successful testing model.  How will this project benefit Linux further?</b></a><br>
 <a href="#Q3"><b>What is in the LTP so far?</b></a><br>
 <a href="#Q4"><b>How do I build this?</b></a><br>
 <a href="#Q5"><b>How do I run this?  How does it work?</b></a><br>
 <a href="#Q6"><b>How do I analyze the results?</b></a><br>
 <a href="#Q7"><b>What if a test program reports a failure?</b></a><br>
 <a href="#Q8"><b>What is a test?</b></a><br>
 <a href="#Q9"><b>Are you doing benchmarking?</b></a><br>
 <a href="#Q10"><b>Are you doing standards testing?</b></a><br>
 
 
 
 <hr noshade size=1>
 
 <!-- answers section -->
 <a name="Q1"></a>
 <h3>What is LTP?</h3>
 LTP is the Linux Test Project, a project that aims to develop a set of
 tools and tests (including regression tests) to verify the functionality and
 stability of the Linux kernel.  We hope this will support Linux development by
 making unit testing more complete and by building a barrier that keeps bugs
 from reaching users.
 
 <p>
 <a name="Q2"></a>
 <h3>Linux already has a successful testing model.  How will this project benefit
 Linux further?</h3>
 The Linux development community uses two important (some would argue the most
 important) testing techniques in its normal operations: design and code
 inspections.  The intent of LTP is to support this practice by giving
 developers an ever-growing set of tools to help identify operational problems
 in their code that may be missed by human review.  One of the toughest
 categories of problems to catch with inspection is the interaction of
 features.  With a continuously improving set of tests and tools, developers
 can get an indication of whether their changes may have broken other
 functionality.
 <p>
 There is no such thing as a perfect test base.  It's only useful if it keeps up
 with new and changing functionality, and if it actually gets used.
 
 <p>
 <a name="Q3"></a>
 <h3>What is in the LTP so far?</h3>
 The first release of code from SGI for LTP was a set of tools for testing file
 systems.  Since that first release, a group of tests we call <i>Quickhit</i>
 tests have been released.  These are simple tests aimed at running through a few
 execution paths in a number of system calls.  Watch the news link for updates on
 the content of the LTP releases.
 
 <p>
 <a name="Q4"></a>
 <h3>How do I build this?</h3>
 
 We decided not to put effort into a grand build process, favoring a minimal
 makefile approach instead.  The project is just starting, and we don't want to
 impose an "SGI approach" where a more open approach, shaped by open source
 contributors, would serve better.  For now, from the root of
 the tree, a simple 'make' should do it.  Send your build errata to
 ltp@oss.sgi.com.
 
 <p>
 <a name="Q5"></a>
 <h3>How do I run this?  How does it work?</h3>
 <b>Simple Answer:</b> <code>./runalltests.sh</code>
 <p><b>Hard Answer:</b> Any way you want to.  We have provided some example
 scripts and files that you can use as a starting point, but it is only a
 starting point.  The tests can be run individually or with the help of a test
 driver.  We have included a test driver called "pan" that is easy to use.  
 <p>
 A problem that still needs to be addressed is how to store different command
 lines and determine when to run them.  For example, should a bigmem test be
 run on a 128 MB system?  Should a file system test that requires 2 GB of
 scratch space be run on a system without that much space?  These two examples
 highlight issues that the runtime framework for tests will need to address.
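 <p>
 The bigmem example above can be sketched as a small runtime check before
 scheduling a test.  Nothing here is part of LTP; the threshold
 <code>MIN_MEM_KB</code> and the test name <code>bigmem01</code> are
 hypothetical.

```shell
#!/bin/sh
# Hypothetical sketch: only schedule a memory-hungry test if the system
# has enough RAM.  MIN_MEM_KB and "bigmem01" are illustrative names,
# not actual LTP conventions.
MIN_MEM_KB=262144   # assume the test needs 256 MB
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null || echo 0)
if [ "${mem_kb:-0}" -ge "$MIN_MEM_KB" ]; then
    echo "would run: bigmem01 (${mem_kb} kB available)"
else
    echo "would skip: bigmem01 (only ${mem_kb:-0} kB available)"
fi
```

 A real framework would make such checks data-driven, storing each test's
 resource requirements alongside its command line.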
 <p>
 <a name="Q6"></a>
 <h3>How do I analyze the results?</h3>
 Our philosophy on writing functional tests is that each test program is
 responsible for setting up an environment for testing the target functionality,
 performing an operation using the target functionality, and then analyzing
 whether the functionality performed according to expectations and reporting the
 results.  Reporting the results can take a couple of different forms.  
 
 <p>
 The simplest form is the exit status of the program.  If a test hits a scenario
 where it finds unexpected or improper results, calling <font
 face="courier">exit()</font> with a non-zero exit status would flag the caller
 that the test failed for some reason.  Conversely, if the test completes with
 the functionality performing as expected, a zero exit would flag the caller that
 the test passed.  Summarizing: <font face="courier"> exit(0)==PASS,
 exit(!0)==FAIL</font>.  Additionally, the test program could report some sort of
 information on standard output indicating why the test passed or failed.
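 <p>
 A minimal sketch of this convention in shell; <code>fake_test</code> is a
 stand-in for a real compiled test program.

```shell
#!/bin/sh
# Sketch of the exit-status convention: exit(0) == PASS, exit(!0) == FAIL.
# fake_test stands in for a real test binary; in practice the caller
# would run the compiled test program itself.
fake_test() {
    # pretend the target functionality behaved as expected
    return 0
}
fake_test
status=$?
if [ "$status" -eq 0 ]; then
    echo "PASS (exit status $status)"
else
    echo "FAIL (exit status $status)"
fi
```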
 
 <p>
 Another form of results reporting used within SGI is a hybrid of the <font
 face="courier">exit()</font> style reporting.  It still uses the exit status,
 but adds a standard format for the test program's standard output.  This
 accomplishes two things: it makes the functional test results easier to
 understand for both humans and computer analysis tools, and it allows more
 than one test case result to be reported from within one test program.  This
 is in contrast to pure exit status analysis: if a test program contains N test
 cases and one of them fails, the program returns a non-zero value, making it
 appear that the whole test failed.  A test that instead reports a PASS or FAIL
 token on standard output for each test case makes the individual results
 visible.
 
 <p>
 For quick clarity, take this example:
 
 <font face="courier">
<pre>
$ ./dup02
dup02       1  PASS : dup(-1) Failed, errno=9 : Bad file descriptor
dup02       2  PASS : dup(1500) Failed, errno=9 : Bad file descriptor
</pre></font>
 
 dup02 is a test program that tests a couple of scenarios for the dup() call.
 Each test case calls dup() with deliberately bad arguments to be sure dup()
 responds with the correct errno value.  In both cases above, dup() is called
 with a non-existent (1500) or undefined (-1) file descriptor.  In both cases,
 dup() should respond with an errno indicating 'Bad file descriptor', and the
 test reports that dup() responded as expected by printing a PASS token.
 
 <p>
 Additionally, the exit status should be zero, indicating that all the test
 cases passed.  The exit status of the test program can be inspected
 immediately after running it by printing the $? variable.
 
 <font face="courier"><pre>
 $ echo $?
 0
 </pre></font>
 
 <p>
 Finally, we can analyze the results of the dup02 test program on two fronts.  We
 can see from the two PASS lines, that both test cases in the dup02 program
 passed.  We can also see from the exit status that the dup02 program as a whole
 passed.
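 <p>
 The two-front analysis above can be sketched in shell.  Here
 <code>run_dup02</code> is a stand-in that replays the dup02 output so the
 sketch is self-contained; a real run would invoke <code>./dup02</code>
 directly.

```shell
#!/bin/sh
# Sketch: analyze a test on two fronts -- the overall exit status plus
# per-test-case PASS/FAIL tokens.  run_dup02 is a stand-in that mimics
# the dup02 transcript; a real run would execute ./dup02 itself.
run_dup02() {
    echo 'dup02       1  PASS : dup(-1) Failed, errno=9 : Bad file descriptor'
    echo 'dup02       2  PASS : dup(1500) Failed, errno=9 : Bad file descriptor'
    return 0
}
output=$(run_dup02)
status=$?    # front one: whole-program exit status
# front two: count per-case result tokens on standard output
passes=$(printf '%s\n' "$output" | grep -c ' PASS ')
fails=$(printf '%s\n' "$output" | grep -c ' FAIL ' || true)
echo "exit status: $status; cases passed: $passes, failed: $fails"
```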
 
 <p>
 <a name="Q7"></a>
 <h3>What if a test program reports a failure?</h3>
 
 After a failure report, analysis consists of three parts: determining what test
 case reported the failure, determining if the reported failure was really a
 failure, and comparing results with a passing baseline.
 <p>
 Determining what test case is reporting a failure is sometimes a challenge.  In
 some cases, in a multiple test case test program, previous test cases can cause
 failure in subsequent test cases.  If a test is well written, the output on
 standard output can be useful.
 <p>
 Once the test case under which the "failure" is occurring has been isolated,
 determine whether the "failure" is really a failure.  If a test is
 poorly written and certain environmental conditions are not handled properly,
 a false failure may occur.
 <p>
 Comparing the failure on the system under test against a baseline system is
 important.  If the test fails on a system but it is unknown whether it ever
 passed on any other system, it is not possible to determine whether the
 problem is a new failure.  Comparing how the functionality under test performs
 on the baseline against how it behaves on the test system is the most
 important method for determining how the "failure" occurred.
 <p>
 <a name="Q8"></a>
 <h3>What is a test?</h3>
 Take a look at the LTP Glossary at <a href="glossary.html">http://oss.sgi.com/projects/ltp/glossary.html</a>.  It covers
 some of the basic testing definitions, including 'test'.
 <p>
 In software testing, the word 'test' has become overloaded and increasingly
 tough to define.  In operating system testing, we might be able to categorize
 tests into a few different areas: functional (regression), duration, stress
 (load), and performance.
 
 <p>
 <a name="Q9"></a>
 <h3>Are you doing benchmarking?</h3>
 Not at this time.  We're more interested in functional, regression, and stress
 testing the Linux kernel.  Benchmarking may prove useful for comparing
 performance across Linux versions.
 
 <p>
 <a name="Q10"></a>
 <h3>Are you doing standards testing?</h3>
 No, we're leaving that to Linux Standard Base (LSB).  Check out 
 <a href="http://www.freestandards.org/">http://www.freestandards.org/</a>.
 
 </font>
 <IMG src="/images/dot_clear.gif" WIDTH="400" HEIGHT="1">
 </TD>
 </TR>
 </TABLE>
 
 <P>
 <CENTER>
 <!-- Virtual Footer -->
 
     <TABLE WIDTH="400" CELLPADDING="0" CELLSPACING="0" BORDER="0">
 <TR>
   <TD  ALIGN="RIGHT">
     <FONT FACE="Helvetica, Arial" SIZE="-1"><a
 	href="../../about/system.html">about this site</a> &nbsp;|&nbsp; <A
 	href="http://www.sgi.com/company_info/privacy.html">privacy policy</A></FONT>
   </TD>
   <TD  ALIGN="CENTER">
     <FONT FACE="Helvetica, Arial">
       |
     </FONT>
   </TD>
   <TD  ALIGN="LEFT">
     <FONT FACE="Helvetica, Arial" SIZE="-1"><A HREF="mailto:owner-ltp@oss.sgi.com">owner(s) of project ltp</A></FONT>
   </TD>
 </TR>
 <TR>
   <TD  ALIGN="RIGHT">
     <FONT FACE="Helvetica, Arial" SIZE="-2"><A HREF="http://www.sgi.com/company_info/copyright.html">Copyright &copy; 1999 Silicon Graphics, Inc.</A> All rights reserved.</FONT>
 
   </TD>
   <TD ALIGN="CENTER">
     <FONT FACE="Helvetica, Arial">
       |
     </FONT>
   </TD>
   <TD  ALIGN="LEFT">
     <FONT FACE="Helvetica, Arial" SIZE="-2"><A HREF="http://www.sgi.com/company_info/trademarks/">Trademark Information</A></FONT>
   </TD>
 </TR>
 </TABLE>
 
 
 </BODY>
 </HTML>