DocumentCode :
2599528
Title :
What makes finite-state models more (or less) testable?
Author :
Owen, David ; Menzies, Tim ; Cukic, Bojan
Author_Institution :
Lane Dept. of Comput. Sci., West Virginia Univ., Morgantown, WV, USA
fYear :
2002
fDate :
2002
Firstpage :
237
Lastpage :
240
Abstract :
This paper studies how details of a particular model can affect the efficacy of a search for defects. We find that if the test method is fixed, we can identify classes of software that are more or less testable. Using a combination of model mutators and machine learning, we find that we can isolate topological features that significantly change the effectiveness of a defect detection tool. More specifically, we show that for one defect detection tool (a stochastic search engine) applied to a certain representation (finite state machines), we can increase the average odds of finding a defect from 69% to 91%. The method used to change those odds is quite general and should apply to other defect detection tools applied to other representations.
Keywords :
finite state machines; learning (artificial intelligence); program testing; defect detection tool; finite-state model testability; machine learning; model mutators; software testing; topological features; Automata; Computer science; Costs; Design for experiments; Machine learning; Mechanical factors; Search engines; Software engineering; Software testing; Stochastic processes;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Proceedings of the 17th IEEE International Conference on Automated Software Engineering (ASE 2002)
ISSN :
1938-4300
Print_ISBN :
0-7695-1736-6
Type :
conf
DOI :
10.1109/ASE.2002.1115019
Filename :
1115019