Title :
What makes finite-state models more (or less) testable?
Author :
Owen, David ; Menzies, Tim ; Cukic, Bojan
Author_Institution :
Lane Dept. of Comput. Sci., West Virginia Univ., Morgantown, WV, USA
Abstract :
This paper studies how details of a particular model can affect the efficacy of a search for defects. We find that if the test method is fixed, we can identify classes of software that are more or less testable. Using a combination of model mutators and machine learning, we find that we can isolate topological features that significantly change the effectiveness of a defect detection tool. More specifically, we show that for one defect detection tool (a stochastic search engine) applied to a certain representation (finite state machines), we can increase the average odds of finding a defect from 69% to 91%. The method used to change those odds is quite general and should apply to other defect detection tools applied to other representations.
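To make the abstract's setup concrete, the sketch below shows a minimal stochastic search over a finite state machine together with a simple model mutator that perturbs its topology. The machine, the state names, and the mutation operator are all illustrative assumptions, not the paper's actual tool or benchmark models.

```python
import random

def random_walk_search(transitions, start, defect_states, max_steps=1000):
    """One stochastic search run: random walk until a defect state is hit."""
    state = start
    for step in range(max_steps):
        if state in defect_states:
            return step          # defect found after this many steps
        successors = transitions.get(state, [])
        if not successors:       # dead end: restart from the initial state
            state = start
            continue
        state = random.choice(successors)
    return None                  # search budget exhausted, no defect found

def mutate(transitions, rng=random):
    """A toy model mutator: redirect one randomly chosen edge, changing
    the machine's topology while leaving its state set fixed."""
    new = {s: list(t) for s, t in transitions.items()}
    src = rng.choice([s for s in new if new[s]])
    i = rng.randrange(len(new[src]))
    new[src][i] = rng.choice(list(new))
    return new

if __name__ == "__main__":
    # Hypothetical 4-state machine; state 3 stands in for a defect.
    fsm = {0: [1, 2], 1: [0, 3], 2: [2, 1], 3: [0]}
    runs = 1000
    hits = sum(random_walk_search(fsm, 0, {3}, max_steps=50) is not None
               for _ in range(runs))
    print(f"odds of finding the defect: {hits / runs:.0%}")
```

Repeating the experiment on many mutated variants of a model and recording which topological features correlate with higher hit rates is, at this level of abstraction, the kind of mutator-plus-learning loop the abstract describes.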
Keywords :
finite state machines; learning (artificial intelligence); program testing; defect detection tool; finite-state model testability; machine learning; model mutators; software testing; topological features; Automata; Computer science; Costs; Design for experiments; Machine learning; Mechanical factors; Search engines; Software engineering; Software testing; Stochastic processes;
Conference_Titel :
Proceedings of the 17th IEEE International Conference on Automated Software Engineering (ASE 2002)
Print_ISBN :
0-7695-1736-6
DOI :
10.1109/ASE.2002.1115019