Linux Test Project HOWTO

10 October 2000

Nate Straz

Abstract

This document explains some of the more in-depth topics of the Linux Test Project and related testing issues. It does not cover basic installation procedures; see the INSTALL and README files in the tarball for that information.

1 Preface

This document was written to help bring the community up to speed on the ins and outs of the Linux Test Project.

1.1 Copyright

Copyright (c) 2000 by SGI, Inc.

Please freely copy and distribute (sell or give away) this document in any format. It is requested that corrections and/or comments be forwarded to the document maintainer. You may create a derivative work and distribute it provided that you:

* Send your derivative work (in the most suitable format, such as SGML) to the LDP (Linux Documentation Project) or the like for posting on the Internet. If not the LDP, then let the LDP know where it is available.

* License the derivative work with this same license or use the GPL. Include a copyright notice and at least a pointer to the license used.

* Give due credit to previous authors and major contributors.

If you are considering making a derived work other than a translation, it is requested that you discuss your plans with the current maintainer.

1.2 Disclaimer

Use the information in this document at your own risk. I disavow any potential liability for the contents of this document. Use of the concepts, examples, and/or other content of this document is entirely at your own risk.

All copyrights are owned by their owners, unless specifically noted otherwise. Use of a term in this document should not be regarded as affecting the validity of any trademark or service mark. Naming of particular products or brands should not be seen as an endorsement.

You are strongly advised to back up your system before any major installation, and to make backups at regular intervals.

2 Introduction

2.1 What is the Linux Test Project?

The Linux Test Project (LTP) is an effort to create a set of tools and tests to verify the functionality and stability of the Linux kernel. We hope this will support Linux development by making unit testing more complete and by minimizing the impact on users, building a barrier that keeps bugs from reaching them.

2.2 What is wrong with the current testing model?

The Linux development community utilizes two important (some would argue the most important) testing techniques in its normal operations: Design and Code Inspections. The intent of LTP is to support this by giving developers an ever-growing set of tools to help identify operational problems in their code that may be missed by human review. One of the toughest categories of problems to catch with inspection is the interaction of features. With a continuously improving set of tests and tools, developers can get an indication of whether their changes may have broken some other functionality.

There is no such thing as a perfect test base. It is only useful if it keeps up with new and changing functionality, and if it actually gets used.

2.3 Are you doing benchmarking?

Not at this time. We are more interested in functional, regression, and stress testing of the Linux kernel. Benchmarking may be worthwhile later for comparing performance across kernel versions.

2.4 Are you doing standards testing?

No, we are leaving that to the Linux Standards Base (LSB). See the Linux Standards Base web site (http://www.linuxbase.org/) for more information.
3 Structure

The basic building block of the test project is a test case, which consists of a single action and a verification that the action worked. The result of a test case is usually restricted to PASS or FAIL.

A test program is a runnable program that contains one or more test cases. Test programs often understand command line options which alter their behavior. The options could determine the amount of memory tested, the location of temporary files, the type of network packet used, or any other useful parameter.

Test tags are used to pair a unique identifier with a test program and a set of command line options. Test tags are the basis for test suites.

4 Writing Tests

Writing a test case is a lot easier than most people think. Any code that you write to examine how a part of the kernel works can be adapted into a test case. All that is needed is a way to report the result of the action to the rest of the world. There are several ways of doing this, some more involved than others.

4.1 Exit Style Tests

Probably the simplest way of reporting the result of a test case is the exit status of your program. If your test program encounters unexpected or incorrect results, exit the test program with a non-zero exit status, e.g. exit(1). Conversely, if your program completes as expected, return a zero exit status, i.e. exit(0). Any test driver should be able to handle this type of error reporting. If a test program has multiple test cases, you won't know which test case failed, but you will know which program failed.

4.2 Formatted Output Tests

The next easiest way of reporting results is to write the result of each test case to standard output. This makes the results more understandable to both the tester and the analysis tools. When the results are written in a standard way, tools can be used to analyze them. (A minimal sketch combining both reporting styles appears at the end of section 5.1.)

5 Testing Tools

The Linux Test Project has not yet decided on a "final" test harness. We have provided a simple solution with pan to make do until a complete solution has been found or created that complements the Linux kernel development process. Several people have said we should use such and such a test harness. Until we find that we need a large, complex test harness, we will apply the KISS principle.

5.1 Pan

pan is a simple test driver. It takes a list of test tags and command lines and runs them. pan can run the tests sequentially or in random order, and in parallel, while capturing test output and cleaning up orphaned processes. pan can also be nested to create very complex test environments.

A pan file contains a list of tests to run. The format of a pan file is as follows:

    testtag testprogram -o one -p two other command line options

    # This is a comment.  It is a good idea to describe the test
    # tags in your pan file.  Test programs can have different
    # behaviors depending on the command line options so it is
    # helpful to describe what each test tag is meant to verify or
    # provoke.

    # Some more test cases
    mm01 mmap001 -m 10000          # 40 MB mmap() test.
    # Creates a 10000 page mmap, touches all of the map, syncs
    # it, and munmap()s it.
    mm03 mmap001 -i 0 -I 1 -m 100  # repetitive mmapping test.
    # Creates a one page map repetitively for one minute.
    dup02 dup02                    # Negative test for dup(2) with bad fd
    kill09 kill09                  # Basic test for kill(2)
    fs-suite01 pan -e -a fs-suite01.zoo -n fs-suite01 -f runtest/fs
    # run the entire set of file system tests

For more information on pan see the man page doc/man1/pan.1.
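To make the exit style (section 4.1) and formatted output (section 4.2) reporting styles concrete, here is a minimal sketch of a test program that could be run under pan. It is a hypothetical example, not one of the tests shipped in the tarball, and the one-line-per-test-case output shown is only illustrative; it is not the rts format that pan records by default.

    /* forktest.c - hypothetical example, not part of the LTP tarball.
     * A minimal exit style test (section 4.1) that also prints a
     * one-line result per test case (section 4.2). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid;
        int status;
        int fails = 0;

        /* Test case 1: fork() returns a valid pid. */
        pid = fork();
        if (pid < 0) {
            printf("forktest 1 FAIL fork() failed\n");
            exit(1);
        }
        if (pid == 0)            /* child exits immediately */
            _exit(0);
        printf("forktest 1 PASS fork() succeeded\n");

        /* Test case 2: wait() reaps the child with exit status 0. */
        if (wait(&status) != pid || !WIFEXITED(status) ||
            WEXITSTATUS(status) != 0) {
            printf("forktest 2 FAIL unexpected wait() result\n");
            fails++;
        } else {
            printf("forktest 2 PASS child reaped with status 0\n");
        }

        /* Exit non-zero if any test case failed so a driver such as
         * pan can detect the failure from the exit status alone. */
        exit(fails ? 1 : 0);
    }

A pan file entry for such a program could be as simple as a single tag line, for example "forktest01 forktest  # basic fork()/wait() checks"; the tag and program names here are, again, purely hypothetical.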
5.2 Scanner

scanner is a results analysis tool that understands the rts style output which pan generates by default. It produces a table summarizing which tests passed and which failed.

5.3 The Quick-hitter Package

Many of the released tests use the Quick-hitter test package to perform tasks such as creating and moving to a temporary directory, handling some common command line parameters, looping, running in parallel, handling signals, and cleaning up. There is an example test case, doc/examples/quickhit.c, which shows how the Quick-hitter package can be used. The file is meant to be a supplement to the documentation, not a working test case. Use any of the tests in tests/ as a template.

6 To Do

There are a lot of things that still need to be done to make this a complete kernel testing system. The following sections discuss some of the to-do items in detail.

6.1 Configuration Analysis

While the number of configuration options for the Linux kernel is seen as a strength by developers and users alike, it is a curse to testers. To create a powerful automated testing system, we need to be able to determine the configuration of the booted machine and then determine which tests should be run on it. The Linux kernel has hundreds of configuration options that can be set when the kernel is compiled. There are more options that can be set when you boot the kernel and while it is running. There are also many patches that can be applied to the kernel to add functionality or change behavior.

6.2 Result Comparison

A lot of testing will be done over the life of the Linux Test Project. Keeping track of the results from all that testing will require some infrastructure. It would be nice to take the output from a test machine, feed it to a program, and receive a list of tests that broke since the last run on that machine, that were fixed, or that work on another test machine but not on this one.
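As a rough illustration of the kind of comparison such a tool might perform, the sketch below assumes a deliberately simplified result format of one "tag STATUS" pair per line (for example, "mm01 PASS"); this is not the rts format that pan and scanner actually use, and the program and file names are hypothetical. It reads two result files and reports every tag whose status changed between the runs.

    /* resdiff.c - hypothetical sketch of a result comparison tool.
     * Assumed input: one "tag STATUS" pair per line, e.g. "mm01 PASS".
     * This is NOT the real rts format that scanner understands. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAXTESTS 1024

    struct result {
        char tag[64];
        char status[16];
    };

    /* Load "tag STATUS" lines from a file into res[]; return the count. */
    static int load(const char *path, struct result *res, int max)
    {
        FILE *fp = fopen(path, "r");
        int n = 0;

        if (!fp) {
            perror(path);
            exit(2);
        }
        while (n < max &&
               fscanf(fp, "%63s %15s", res[n].tag, res[n].status) == 2)
            n++;
        fclose(fp);
        return n;
    }

    int main(int argc, char **argv)
    {
        static struct result prev[MAXTESTS], cur[MAXTESTS];
        int nprev, ncur, i, j;

        if (argc != 3) {
            fprintf(stderr, "usage: %s old.results new.results\n", argv[0]);
            return 2;
        }
        nprev = load(argv[1], prev, MAXTESTS);
        ncur  = load(argv[2], cur,  MAXTESTS);

        /* Report every tag whose status changed between the two runs,
         * plus any tag that appears only in the newer run. */
        for (i = 0; i < ncur; i++) {
            for (j = 0; j < nprev; j++) {
                if (strcmp(cur[i].tag, prev[j].tag) == 0) {
                    if (strcmp(cur[i].status, prev[j].status) != 0)
                        printf("%s: %s -> %s\n", cur[i].tag,
                               prev[j].status, cur[i].status);
                    break;
                }
            }
            if (j == nprev)
                printf("%s: new test, %s\n", cur[i].tag, cur[i].status);
        }
        return 0;
    }

Run as, for example, "resdiff yesterday.results today.results"; a line such as "mm01: PASS -> FAIL" would indicate a regression since the previous run.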
7 Contact information and updates

URL: http://oss.sgi.com/projects/ltp/
email: owners-ltp@oss.sgi.com
mailing list: ltp@oss.sgi.com
list archive: http://oss.sgi.com/projects/ltp/mail-threaded/

Questions and comments should be sent to the LTP mailing list at ltp@oss.sgi.com. To subscribe, send mail to majordomo@oss.sgi.com with "subscribe ltp" in the body of the message.

The source is also available via CVS. See the web site for a web interface and checkout instructions.

8 Glossary

Test
    IEEE/ANSI (Kit, Edward. Software Testing in the Real World: Improving the Process, p. 82. ACM Press, 1995): (i) An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component. (ii) A set of one or more test cases.

Test Case
    A test assertion with a single result that is being verified. This allows designations such as PASS or FAIL to be applied to a single bit of functionality. A single test case may be one of many test cases for testing the complete functionality of a system. IEEE/ANSI: (i) A set of test inputs, execution conditions, and expected results developed for a particular objective. (ii) The smallest entity that is always executed as a unit, from beginning to end.

Test Driver
    A program that handles the execution of test programs. It is responsible for starting the test programs, capturing their output, and recording their results. pan is an example of a test driver.

Test Framework
    A mechanism for organizing a group of tests. Frameworks may have complex or very simple APIs, drivers, and result logging mechanisms. Examples of frameworks are TETware and DejaGnu.

Test Harness
    The mechanism that connects a test program to a test framework. It may be a specification of exit codes, or a set of libraries for formatting messages and determining exit codes. In TETware, the tet_result() API is the test harness.

Test Program
    A single invokable program. A test program can contain one or more test cases. The test harness's API allows for reporting and analysis of the individual test cases.

Test Suite
    A collection of test programs, assertions, and cases grouped together under a framework.

Test Tag
    An identifier that corresponds to a command line which runs a test. The tag is a single word that matches a test program with a set of command line arguments.