DocumentCode :
1010763
Title :
An examination of fault exposure ratio
Author :
Malaiya, Yashwant K. ; Von Mayrhauser, Anneliese ; Srimani, Pradip K.
Author_Institution :
Dept. of Comput. Sci., Colorado State Univ., Fort Collins, CO, USA
Volume :
19
Issue :
11
fYear :
1993
fDate :
11/1/1993
Firstpage :
1087
Lastpage :
1094
Abstract :
The fault exposure ratio, K, is an important factor that controls the per-fault hazard rate and hence the effectiveness of software testing. The authors examine how K varies with fault density, which declines with testing time. Because the remaining faults become harder to find, K should decline if testing is strictly random. However, it is shown that at lower fault densities K tends to increase. This is explained using the hypothesis that real testing is more efficient than strictly random testing, especially toward the end of the test phase. Data sets from several different projects (in the USA and Japan) are analyzed. When the two factors, i.e., the shift in the detectability profile and the nonrandomness of testing, are combined, the analysis leads to the logarithmic model, which is known to have superior predictive capability.
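Note: the following is a hedged sketch of the standard relations behind the abstract, using the common notation of the software reliability literature (the symbols T_L, N(t), lambda_0, and theta are assumptions from that literature, not quoted from this paper). It shows how K ties the per-fault hazard rate to execution time and what the logarithmic (Musa-Okumoto) model referred to looks like.
% Per-fault hazard rate expressed through the fault exposure ratio K and the
% linear execution time T_L (time needed to execute every instruction once):
\[ h = \frac{K}{T_L} \]
% Overall failure intensity when N(t) faults remain in the program:
\[ \lambda(t) = \frac{K \, N(t)}{T_L} \]
% The logarithmic (Musa-Okumoto) model mentioned in the abstract, with initial
% failure intensity \lambda_0 and decay parameter \theta:
\[ \mu(t) = \frac{1}{\theta} \ln\!\bigl(\lambda_0 \theta t + 1\bigr) \]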
Keywords :
program testing; software reliability; detectability profile; fault density; fault exposure ratio; logarithmic model; per-fault hazard rate; predictive capability; software testing; debugging; density measurement; fault detection; hazards; predictive models;
fLanguage :
English
Journal_Title :
IEEE Transactions on Software Engineering
Publisher :
IEEE
ISSN :
0098-5589
Type :
jour
DOI :
10.1109/32.256855
Filename :
256855