DocumentCode :
3101227
Title :
An Interpretable Neural Network Ensemble
Author :
Hartono, Pitoyo ; Hashimoto, Shuji
Author_Institution :
Future Univ.-Hakodate, Hakodate
fYear :
2007
fDate :
5-8 Nov. 2007
Firstpage :
228
Lastpage :
232
Abstract :
The objective of this study is to build a neural network classifier that is not only reliable but also, unlike most currently available neural networks, logically interpretable in a human-plausible manner. Most existing studies of rule extraction from trained neural networks focus on extracting rules from network models that were designed without rule extraction in mind; after training, such models are effectively used as black boxes, which makes rule extraction a hard task. In this study we construct a neural network ensemble designed with rule extraction in mind, so that the function of the ensemble can be readily interpreted to generate logical rules that are understandable to humans. We believe that the interpretability of neural networks improves their reliability and usability when they are applied to critical real-world problems.
Keywords :
neural nets; pattern classification; human-plausible manner; interpretable neural network ensemble; neural network classifier; rule extraction; multilayer perceptrons; neural networks; neurons; usability;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Industrial Electronics Society, 2007. IECON 2007. 33rd Annual Conference of the IEEE
Conference_Location :
Taipei
ISSN :
1553-572X
Print_ISBN :
1-4244-0783-4
Type :
conf
DOI :
10.1109/IECON.2007.4460332
Filename :
4460332