DocumentCode :
1727757
Title :
Finding Faults: Manual Testing vs. Random+ Testing vs. User Reports
Author :
Ciupa, Ilinca ; Meyer, Bertrand ; Oriol, Manuel ; Pretschner, Alexander
Author_Institution :
Dept. of Comput. Sci., ETH Zurich, Zurich, Switzerland
fYear :
2008
Firstpage :
157
Lastpage :
166
Abstract :
The usual way to compare testing strategies, whether theoretically or empirically, is to compare the number of faults they detect. As a criterion for establishing that one testing strategy is definitively better than another, this is rather coarse: shouldn't the nature of faults matter as well as their number? The empirical study reported here confirms this conjecture. An analysis of faults detected in Eiffel libraries through three different techniques (random tests, manual tests, and user incident reports) shows that each is good at uncovering significantly different kinds of faults. None of the techniques subsumes any of the others; each brings distinct contributions.
Keywords :
program testing; software fault tolerance; Eiffel libraries; faults detection; manual tests; random tests; software faults; software testing; user incident reports; Automatic testing; Computer science; Contracts; Fault detection; Manuals; Reliability engineering; Software libraries; Software reliability; Software testing; Vehicle crash testing; empirical studies; fault classification; object oriented software; testing strategies;
fLanguage :
English
Publisher :
ieee
Conference_Title :
19th International Symposium on Software Reliability Engineering (ISSRE 2008)
Conference_Location :
Seattle, WA
ISSN :
1071-9458
Print_ISBN :
978-0-7695-3405-3
Type :
conf
DOI :
10.1109/ISSRE.2008.18
Filename :
4700320