DocumentCode :
2652224
Title :
Machine-Learning Models for Software Quality: A Compromise between Performance and Intelligibility
Author :
Lounis, Hakim ; Gayed, Tamer ; Boukadoum, Mounir
Author_Institution :
Dept. d'Informatique, Univ. du Quebec a Montreal, Montreal, QC, Canada
fYear :
2011
fDate :
7-9 Nov. 2011
Firstpage :
919
Lastpage :
921
Abstract :
Building powerful machine-learning assessment models is an important achievement of empirical software engineering research, but it is not the only one. Intelligibility of such models is also needed, especially in a domain like software engineering, where exploration and knowledge capture are still a challenge. Several algorithms, belonging to various machine-learning approaches, are selected and run on software data collected from medium-size applications. Some of these approaches produce models with very high quantitative performance; others give interpretable, intelligible, "glass-box" models that are highly complementary. We consider that integrating both, in automated decision-making systems for assessing software product quality, is desirable to reach a compromise between performance and intelligibility.
Keywords :
learning (artificial intelligence); software metrics; software quality; automated decision-making systems; machine-learning assessment models; machine-learning models; software engineering; software metrics; software quality; Conferences; Knowledge engineering; Maximum likelihood estimation; Software; Software engineering; assessment models; machine-learning; maintainability; metrics; reusability; software product quality;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Tools with Artificial Intelligence (ICTAI), 2011 23rd IEEE International Conference on
Conference_Location :
Boca Raton, FL
ISSN :
1082-3409
Print_ISBN :
978-1-4577-2068-0
Electronic_ISBN :
1082-3409
Type :
conf
DOI :
10.1109/ICTAI.2011.155
Filename :
6103446