DocumentCode :
328894
Title :
Hierarchical mixtures of experts and the EM algorithm
Author :
Jordan, Michael I. ; Jacobs, Robert A.
Author_Institution :
Dept. of Brain & Cognitive Sci., MIT, Cambridge, MA, USA
Volume :
2
fYear :
1993
fDate :
25-29 Oct. 1993
Firstpage :
1339
Abstract :
We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum likelihood problem; in particular, we present an expectation-maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an online learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
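Illustrative sketch (not from the paper): the abstract describes expectation-maximization updates for a mixture-of-experts model with generalized linear experts and gating networks. The following minimal Python sketch shows the E-step/M-step structure for a simplified one-level mixture of linear experts with a softmax gate and unit-variance Gaussian noise; the names (em_step, gate_W, expert_W) are illustrative, and the paper's hierarchical architecture and IRLS-based M-step for the gating GLIM are replaced here by weighted least squares for the experts and a single gradient step for the gate.

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def em_step(X, y, gate_W, expert_W, gate_lr=0.1):
    # E-step: posterior responsibility of each expert for each example.
    g = softmax(X @ gate_W)                       # (N, J) gating priors
    mu = X @ expert_W                             # (N, J) expert predictions
    lik = np.exp(-0.5 * (y[:, None] - mu) ** 2)   # Gaussian likelihoods
    h = g * lik
    h /= h.sum(axis=1, keepdims=True)             # posterior responsibilities

    # M-step (experts): responsibility-weighted least squares per expert.
    D, J = expert_W.shape
    new_expert_W = np.empty_like(expert_W)
    for j in range(J):
        w = h[:, j]
        A = X.T @ (w[:, None] * X) + 1e-6 * np.eye(D)
        new_expert_W[:, j] = np.linalg.solve(A, X.T @ (w * y))

    # M-step (gate): one gradient ascent step that pulls the gating priors
    # toward the posteriors (the paper fits this GLIM by IRLS instead).
    new_gate_W = gate_W + gate_lr * X.T @ (h - g) / len(y)
    return new_gate_W, new_expert_W

# Tiny usage example on a piecewise-linear target.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, 200), np.ones(200)])  # input + bias
y = np.where(X[:, 0] < 0, -2 * X[:, 0], 3 * X[:, 0]) + 0.05 * rng.normal(size=200)
gate_W, expert_W = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
for _ in range(50):
    gate_W, expert_W = em_step(X, y, gate_W, expert_W)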
Keywords :
learning (artificial intelligence); maximum likelihood estimation; neural net architecture; expectation-maximization algorithm; experts; generalized linear models; hierarchical mixture model; maximum likelihood problem; neural nets; robot dynamics; statistical model; supervised learning; tree-structured architecture; Biological neural networks; Jacobian matrices; Machine learning algorithms; Mars; Orbital robotics; Partitioning algorithms; Psychology; Supervised learning; Surface fitting; Vectors;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Proceedings of 1993 International Joint Conference on Neural Networks (IJCNN '93-Nagoya)
Print_ISBN :
0-7803-1421-2
Type :
conf
DOI :
10.1109/IJCNN.1993.716791
Filename :
716791