DocumentCode :
3846216
Title :
Evaluating Stability and Comparing Output of Feature Selectors that Optimize Feature Subset Cardinality
Author :
Petr Somol; Jana Novovicova
Author_Institution :
Institute of Information Theory and Automation of the Czech Academy of Sciences, Prague
Volume :
32
Issue :
11
fYear :
2010
Firstpage :
1921
Lastpage :
1939
Abstract :
Stability (robustness) of feature selection methods is a topic of recent interest, yet of often neglected importance, with a direct impact on the reliability of machine learning systems. We investigate the problem of evaluating the stability of feature selection processes that yield subsets of varying size. We introduce several novel feature selection stability measures and adjust some existing measures within a unifying framework that offers broad insight into the stability problem. We study in detail the properties of the considered measures and demonstrate on various examples what information about the feature selection process can be gained. We also introduce an alternative approach to feature selection evaluation in the form of measures that enable comparing the similarity of two feature selection processes. These measures enable comparing, e.g., the output of two feature selection methods or two runs of one method with different parameters. The information obtained using the considered stability and similarity measures is shown to be usable for assessing feature selection methods (or criteria) as such.
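As a rough illustration of the kind of stability evaluation the abstract describes, the sketch below computes the average pairwise Tanimoto (Jaccard) index over the feature subsets returned by repeated runs of a selector. This is one commonly used set-based stability measure of the family the paper works with; it is offered here only as an assumed, generic example, not as one of the paper's specific novel measures.

```python
from itertools import combinations

def tanimoto(a, b):
    """Tanimoto (Jaccard) index between two feature subsets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def average_tanimoto_index(subsets):
    """Mean pairwise Tanimoto index over all runs of a feature selector.

    `subsets` is a list of feature-index collections, one per run; the
    subsets may differ in cardinality. Returns a value in [0, 1], where
    1 means every run selected exactly the same features.
    """
    pairs = list(combinations(subsets, 2))
    if not pairs:
        return 1.0
    return sum(tanimoto(a, b) for a, b in pairs) / len(pairs)

# Hypothetical example: three runs of a selector yielding subsets of varying size.
runs = [{0, 2, 5}, {0, 2, 5, 7}, {0, 2, 6}]
print(f"stability (Tanimoto-based): {average_tanimoto_index(runs):.3f}")
```

Because such a measure compares sets rather than fixed-length feature vectors, it stays well defined when the selector optimizes subset cardinality and returns subsets of different sizes across runs, which is the setting the paper addresses.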
Keywords :
"Area measurement","Size measurement","Stability criteria","Robust stability","Optimization methods","Robustness","Learning systems","Gain measurement","Pattern recognition","Information theory"
Journal_Title :
IEEE Transactions on Pattern Analysis and Machine Intelligence
Publisher :
IEEE
ISSN :
0162-8828
Type :
jour
DOI :
10.1109/TPAMI.2010.34
Filename :
5401167