Title :
Design interpretable neural network trees through self-organized learning of features
Author :
Xu, Qinzhen ; Zhao, Qiangfu ; Pei, Wenjiang ; Yang, Luxi ; He, Zhenya
Author_Institution :
Southeast Univ., Nanjing, China
Abstract :
A neural network tree (NNTree) is a modular neural network whose overall structure is a decision tree (DT) and whose non-terminal nodes are expert neural networks (ENNs). One advantage of NNTrees is that they are effectively "gray boxes": they can be interpreted easily if the number of inputs to each ENN is limited. To design interpretable NNTrees, we previously proposed a genetic algorithm based on multiple objective optimization. That algorithm, however, is suitable only for problems with binary inputs. In this paper, we propose a method for problems with continuous inputs. The basic idea is to find a small number of critical points for each continuous input using self-organized learning, and to quantize the input using these critical points. Experimental results on several public databases show that the NNTrees built from the quantized data are much more interpretable and, in most cases, perform as well as those obtained from the original data.
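The following is a minimal sketch of the quantization idea summarized in the abstract, assuming a small one-dimensional self-organizing map is trained per continuous feature and its learned prototype locations are taken as the critical points. The function names, parameters, and toy data below are illustrative assumptions, not the authors' implementation.

import numpy as np

def find_critical_points(values, n_points=4, n_epochs=20, lr0=0.5, sigma0=1.0, seed=0):
    """Learn n_points critical points for one continuous feature with a 1-D SOM."""
    rng = np.random.default_rng(seed)
    # Initialize prototypes evenly across the observed range of the feature.
    protos = np.linspace(values.min(), values.max(), n_points)
    idx = np.arange(n_points)
    for epoch in range(n_epochs):
        lr = lr0 * (1.0 - epoch / n_epochs)                    # decaying learning rate
        sigma = max(sigma0 * (1.0 - epoch / n_epochs), 1e-3)   # shrinking neighborhood width
        for v in rng.permutation(values):
            winner = np.argmin(np.abs(protos - v))             # best-matching prototype
            h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))  # neighborhood function
            protos += lr * h * (v - protos)                    # pull prototypes toward the sample
    return np.sort(protos)

def quantize(values, critical_points):
    """Replace each continuous value by the index of its nearest critical point."""
    return np.argmin(np.abs(values[:, None] - critical_points[None, :]), axis=1)

# Usage on a toy bimodal feature: two critical points are found and the
# feature is reduced to a two-valued discrete input.
if __name__ == "__main__":
    x = np.concatenate([np.random.normal(0.0, 0.3, 100),
                        np.random.normal(5.0, 0.3, 100)])
    cps = find_critical_points(x, n_points=2)
    print("critical points:", cps)
    print("quantized sample:", quantize(x[:5], cps))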
Keywords :
decision trees; genetic algorithms; learning (artificial intelligence); self-organising feature maps; critical points; expert neural network; gray boxes; interpretable neural network trees; multiple objective optimization; nonterminal node; public databases; self-organized feature learning; Algorithm design and analysis; Boolean functions; Computational efficiency; Databases; Decision trees; Design optimization; Genetic algorithms; Neural networks; Neurons
Conference_Title :
2004 IEEE International Joint Conference on Neural Networks (IJCNN 2004), Proceedings
Print_ISBN :
0-7803-8359-1
DOI :
10.1109/IJCNN.2004.1380161