UNF CEN 6070 - Predicting where Faults Can Hide from Testing


Sensitivity analysis estimates the probability that a program location can hide a failure-causing fault. It does not require an oracle because correctness is not the issue.

Predicting Where Faults Can Hide from Testing
JEFFREY VOAS, NASA Langley Research Center
LARRY MORELL, Hampton University
KEITH MILLER, College of William and Mary

Testing seeks to reveal software faults by executing a program and comparing the output expected to the output produced. Exhaustive testing is the only testing scheme that can (in some sense) guarantee correctness. All other testing schemes are based on the assumption that successful execution on some inputs implies (but does not guarantee) successful execution on other inputs.

Because it is well known that some programming faults are very difficult to find with testing, our research focuses on program characteristics that make faults hard to find with random black-box testing. Given a piece of code, we try to predict if random black-box testing is likely to reveal any faults in that code. (By "fault," we mean those that have been compiled into the code.)

A program's testability is a prediction of its ability to hide faults when the program is black-box-tested with inputs selected randomly from a particular input distribution. You determine testability by the code's structure and semantics and by an assumed input distribution. Thus, two programs can compute the same function but may have different testabilities. A program has high testability when it readily reveals faults through random black-box testing; a program with low testability is unlikely to reveal faults through random black-box testing. A program with low testability is dangerous because considerable testing may make it appear that the program has no faults when in reality it has many.

A fault can lie anywhere in a program, so any method of determining testability must take into consideration all places in the code where a fault can occur.
Although you can use our proposed techniques at different granularities, this article concentrates on locations that roughly correspond to single commands in an imperative, procedural language.

1.  read(a, b, c);
2.  if a <> 0 then begin
3.    d := b*b - 5*a*c;
4.    if d < 0 then
5.      x := 0
      else
6.      x := (-b + trunc(sqrt(d))) div (2*a)
      end
    else
7.    x := -c div b;
8.  if (a*x*x + b*x + c = 0) then
9.    writeln(x, ' is an integral solution')
      else
10.   writeln('There is no integral solution')

Figure 1. Example program.

We expect that any method for determining testability will require extensive analysis, a large amount of computing resources, or both. However, the potential benefits of measuring testability are significant. If you can effectively estimate testability, you can gain considerable insight into four issues important to testing:

+ Where to get the most benefit from limited testing resources. A module with low testability requires more testing than a module with high testability. Testing resources can thus be distributed more effectively.

+ When to use some verification technique other than testing. Extremely low testability suggests that an inordinate amount of testing may be required to gain confidence in the software's correctness. Alternative techniques like proofs of correctness or code review may be more appropriate for such modules.

+ The degree to which testing must be performed to convince you that a location is probably correct. You can use testability to estimate how many tests are necessary to gain desired confidence in the software's correctness.

+ Whether the software should be rewritten. You may use testability as a guide to whether critical software has been sufficiently verified. If a piece of critical software has low testability, you may reject it because too much testing would be required to verify a sufficient level of reliability.
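As a reading aid (not part of the original article), Figure 1 translates line by line into Python. The `coeff` parameter is our own addition: `coeff=5` matches the listing as printed, while `coeff=4` gives the textbook discriminant b*b - 4*a*c, which suggests the 5 at location 3 is a planted fault. Running both side by side shows how often that fault hides.

```python
import math

def solve(a, b, c, coeff=5):
    """Figure 1 rendered in Python. coeff=5 matches the listing;
    coeff=4 is the standard discriminant, so the 5 is presumably
    a planted fault at location 3."""
    def div(p, q):                 # Pascal's div truncates toward zero
        return math.trunc(p / q)
    if a != 0:                                          # location 2
        d = b * b - coeff * a * c                       # location 3
        if d < 0:                                       # location 4
            x = 0                                       # location 5
        else:
            x = div(-b + math.trunc(math.sqrt(d)), 2 * a)   # location 6
    else:
        x = div(-c, b)                                  # location 7
    if a * x * x + b * x + c == 0:                      # location 8
        return f"{x} is an integral solution"           # location 9
    return "There is no integral solution"              # location 10

# The fault hides on many inputs: the check at location 8 masks a wrong
# discriminant unless it flips the sign of d or shifts the computed x.
print(solve(1, 0, -1), "|", solve(1, 0, -1, coeff=4))   # same answer
print(solve(1, 6, 8), "|", solve(1, 6, 8, coeff=4))     # fault revealed
```

On input (1, 0, -1) both versions print "1 is an integral solution"; on (1, 6, 8) the faulty discriminant goes negative and the solution -2 is missed.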
SENSITIVITY

We use the word "sensitivity" to mean a prediction of the probability that a fault will cause a failure in the software at a particular location under a specified input distribution. If a location has a sensitivity of 0.99 under a particular distribution, almost any input in the distribution that executes the location will cause a program failure. If a location has a sensitivity of 0.01, relatively few inputs from the distribution that execute it would cause the program to fail, no matter what faults exist at that location.

Sensitivity is clearly related to testability, but the terms are not equivalent. Sensitivity focuses on a single location in a program and the effects a fault at that location can have on the program's I/O behavior. Testability encompasses the whole program and its sensitivities under a given input distribution.

Sensitivity analysis is the process of determining the sensitivity of a location in a program. From the collection of sensitivities over all locations, we determine the program's testability. Sensitivity analysis differs from software testing, since you check no outputs against a specification or oracle.

FAULT/FAILURE MODEL

If the presence of faults in programs guaranteed program failure, every program would be highly testable. But this is not true. To understand why, you must consider the sequence of location executions that a program performs. Each set of variable values after the execution of a location in a computation is called a data state.
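A sensitivity like the one defined above could, for instance, be approximated empirically by mutating a single location and measuring how often random inputs change the program's output. The sketch below is our illustration, not the authors' algorithm; `estimate_sensitivity`, `sample_input`, and the toy programs are hypothetical names.

```python
import random

def estimate_sensitivity(program, mutant, sample_input, trials=10_000):
    """Monte Carlo sketch: the fraction of random inputs on which a
    single-location mutant changes the program's output, an empirical
    stand-in for that location's sensitivity under the distribution."""
    failures = 0
    for _ in range(trials):
        x = sample_input()
        if mutant(x) != program(x):   # differing output = observed failure
            failures += 1
    return failures / trials

# A mutant that misbehaves only on negative inputs hides from a
# distribution that rarely produces negatives: low estimated sensitivity.
random.seed(0)
correct = lambda x: abs(x)
mutated = lambda x: x                          # wrong only for x < 0
mostly_positive = lambda: random.randint(-1, 98)   # ~1% negative inputs
print(estimate_sensitivity(correct, mutated, mostly_positive))
```

Under this distribution the estimate comes out near 0.01: almost all random tests execute the faulty line yet produce correct output.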
After executing a fault, the resulting data state might be corrupted; if there is corruption in a data state, infection has occurred and the data state contains an error, which we call a "data-state error." The program in Figure 1 displays an integral solution to a quadratic equation if one exists.
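The execution/infection/propagation chain can be seen in miniature with a two-location toy example of our own (not from the article): the fault below always executes and always infects the data state, yet the error often fails to propagate through the integer division at the second location.

```python
def faulty(x):
    y = x + 2      # location 1, faulty: should be y = x + 1
    z = y // 3     # location 2: a coarse computation can mask the error
    return z

def correct(x):
    y = x + 1
    z = y // 3
    return z

# For x = 0 the faulty data state y = 2 is infected (correct y = 1),
# but both divisions yield z = 0: the data-state error does not
# propagate to the output, so no failure is observed.
print(faulty(0), correct(0))   # 0 0 -> error masked
print(faulty(1), correct(1))   # 1 0 -> error propagates: failure
```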

