DocumentCode :
1330108
Title :
Learning a Metric for Code Readability
Author :
Buse, Raymond P.L.; Weimer, Westley R.
Author_Institution :
Univ. of Virginia, Charlottesville, VA, USA
Volume :
36
Issue :
4
fYear :
2010
Firstpage :
546
Lastpage :
558
Abstract :
In this paper, we explore the concept of code readability and investigate its relation to software quality. With data collected from 120 human annotators, we derive associations between a simple set of local code features and human notions of readability. Using those features, we construct an automated readability measure and show that it can be 80 percent effective and better than a human, on average, at predicting readability judgments. Furthermore, we show that this metric correlates strongly with three measures of software quality: code changes, automated defect reports, and defect log messages. We measure these correlations on over 2.2 million lines of code, as well as longitudinally, over many releases of selected projects. Finally, we discuss the implications of this study on programming language design and engineering practice. For example, our data suggest that comments, in and of themselves, are less important than simple blank lines to local judgments of readability.
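The abstract describes deriving an automated readability measure from simple local code features and human annotator judgments. The following is a minimal sketch of that general idea, assuming hypothetical features (blank-line ratio, average line length, average identifier length, comment ratio) and toy labels; it is not the authors' feature set, data, or model.

  # Sketch: map simple local code features to readability labels with an
  # off-the-shelf classifier. Features, snippets, and labels are illustrative only.
  import re
  from sklearn.linear_model import LogisticRegression

  def local_features(snippet: str):
      lines = snippet.splitlines() or [""]
      blank_ratio = sum(1 for l in lines if not l.strip()) / len(lines)
      avg_line_len = sum(len(l) for l in lines) / len(lines)
      idents = re.findall(r"[A-Za-z_]\w*", snippet)
      avg_ident_len = sum(map(len, idents)) / len(idents) if idents else 0.0
      comment_ratio = sum(1 for l in lines
                          if l.strip().startswith(("//", "#", "/*", "*"))) / len(lines)
      return [blank_ratio, avg_line_len, avg_ident_len, comment_ratio]

  # Toy training data: snippets paired with binary "readable" labels from annotators.
  snippets = [
      "int x=0;int y=1;compute(x,y);",
      "// add two numbers\nint total = first + second;\n\nreturn total;",
  ]
  labels = [0, 1]

  model = LogisticRegression().fit([local_features(s) for s in snippets], labels)
  print(model.predict([local_features("sum = a + b  # running total")]))

In this sketch, each snippet is reduced to a short feature vector and the classifier's prediction stands in for the paper's readability judgment; the study itself uses data from 120 human annotators and evaluates the learned metric against software-quality indicators.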
Keywords :
human factors; software quality; automated defect reports; code changes; code readability; defect log messages; human notions; local code features; programming language design; FindBugs; software readability; code metrics; machine learning; program understanding; software maintenance;
fLanguage :
English
Journal_Title :
Software Engineering, IEEE Transactions on
Publisher :
IEEE
ISSN :
0098-5589
Type :
jour
DOI :
10.1109/TSE.2009.70
Filename :
5332232