DocumentCode
65016
Title
Learning Understandable Neural Networks With Nonnegative Weight Constraints
Author
Chorowski, J. ; Zurada, J.M.
Author_Institution
Dept. of Math. & Comput. Sci., Univ. of Wroclaw, Wroclaw, Poland
Volume
26
Issue
1
fYear
2015
fDate
Jan. 2015
Firstpage
62
Lastpage
69
Abstract
People can understand complex structures if they relate to more isolated yet understandable concepts. Despite this fact, popular pattern recognition tools, such as decision tree or production rule learners, produce only flat models which do not build intermediate data representations. On the other hand, neural networks typically learn hierarchical but opaque models. We show how constraining neurons' weights to be nonnegative improves the interpretability of a network's operation. We analyze the proposed method on large data sets: the MNIST digit recognition data and the Reuters text categorization data. The patterns learned by traditional and constrained networks are contrasted with those learned with principal component analysis and nonnegative matrix factorization.
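A minimal sketch of the idea the abstract describes, assuming a plain NumPy multilayer perceptron trained by gradient descent with the weight matrices projected onto the nonnegative orthant after each update; the toy data, network sizes, and learning rate are illustrative assumptions, not the authors' exact training procedure.

# Sketch (assumed setup): one-hidden-layer MLP with nonnegative weight constraints
# enforced by clipping weights to zero after every gradient step.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 nonnegative 20-dimensional inputs; label depends monotonically
# on the first five features, so it is representable with nonnegative weights.
X = rng.random((100, 20))
y = (X[:, :5].sum(axis=1) > 2.5).astype(float).reshape(-1, 1)

# Small MLP: 20 -> 8 -> 1, weights initialized nonnegative; biases unconstrained.
W1 = rng.random((20, 8)) * 0.1
b1 = np.zeros(8)
W2 = rng.random((8, 1)) * 0.1
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(500):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)          # hidden activations
    P = sigmoid(H @ W2 + b2)          # predicted probabilities

    # Backward pass (cross-entropy loss with sigmoid outputs).
    dZ2 = (P - y) / len(X)
    dW2 = H.T @ dZ2
    db2 = dZ2.sum(axis=0)
    dZ1 = (dZ2 @ W2.T) * H * (1 - H)
    dW1 = X.T @ dZ1
    db1 = dZ1.sum(axis=0)

    # Gradient step followed by projection onto the nonnegative orthant:
    # any weight driven below zero is clipped back to zero.
    W1 = np.maximum(W1 - lr * dW1, 0.0)
    W2 = np.maximum(W2 - lr * dW2, 0.0)
    b1 -= lr * db1
    b2 -= lr * db2

# Final evaluation: the learned weights remain nonnegative by construction.
P = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print("train accuracy:", ((P > 0.5) == y).mean())
print("all weights nonnegative:", bool((W1 >= 0).all() and (W2 >= 0).all())))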
Keywords
document image processing; learning (artificial intelligence); matrix decomposition; neural nets; pattern classification; principal component analysis; text analysis; MNIST digit recognition data; Reuters text categorization data; neural network learning; nonnegative matrix factorization; nonnegative weight constraints; pattern recognition; principal component analysis; Biological neural networks; Data models; Educational institutions; Neurons; Principal component analysis; Training; Vectors; Multilayer perceptron; pattern analysis; supervised learning; white-box models
fLanguage
English
Journal_Title
Neural Networks and Learning Systems, IEEE Transactions on
Publisher
IEEE
ISSN
2162-237X
Type
jour
DOI
10.1109/TNNLS.2014.2310059
Filename
6783731
Link To Document