DocumentCode :
1547745
Title :
Learning efficiency of redundant neural networks in Bayesian estimation
Author :
Watanabe, Sumio
Author_Institution :
Precision & Intelligence Lab., Tokyo Inst. of Technol., Yokohama, Japan
Volume :
12
Issue :
6
fYear :
2001
fDate :
11/1/2001
Firstpage :
1475
Lastpage :
1486
Abstract :
This paper proves that the Bayesian stochastic complexity of a layered neural network is asymptotically smaller than that of a regular statistical model if it contains the true distribution. We consider the case in which a three-layer perceptron with M input units, H hidden units, and N output units is trained to estimate the true distribution represented by the model with H0 hidden units, and prove that the stochastic complexity is asymptotically smaller than (1/2){H0(M+N)+R} log n, where n is the number of training samples and R is a function of H-H0, M, and N that is far smaller than the number of redundant parameters. Since the generalization error of Bayesian estimation is equal to the increase in stochastic complexity, it is smaller than (1/2n){H0(M+N)+R} if it has an asymptotic expansion. Based on these results, the difference between layered neural networks and regular statistical models is discussed from the statistical point of view.
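The two bounds stated in the abstract can be sketched numerically. This is a minimal illustration only: the quantity R has a closed form given in the paper (as a function of H-H0, M, and N) that is not reproduced here, so it is left as an input, and all example values below are hypothetical.

```python
import math

def stochastic_complexity_bound(H0, M, N, R, n):
    # Asymptotic upper bound from the abstract: (1/2) {H0 (M+N) + R} log n,
    # where n is the number of training samples.
    return 0.5 * (H0 * (M + N) + R) * math.log(n)

def generalization_error_bound(H0, M, N, R, n):
    # Corresponding generalization-error bound: (1/(2n)) {H0 (M+N) + R},
    # which follows because the generalization error equals the increase
    # in stochastic complexity.
    return (H0 * (M + N) + R) / (2.0 * n)

# Illustrative values (not from the paper): M = 5 inputs, N = 1 output,
# true model with H0 = 2 hidden units, n = 1000 samples, placeholder R = 3.
print(stochastic_complexity_bound(H0=2, M=5, N=1, R=3, n=1000))
print(generalization_error_bound(H0=2, M=5, N=1, R=3, n=1000))
```

For a regular statistical model the corresponding coefficient would be half the full parameter count, so the interest of the result is that H0(M+N)+R stays far below that when the network is redundant (H > H0).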
Keywords :
Bayes methods; function approximation; generalisation (artificial intelligence); learning (artificial intelligence); multilayer perceptrons; probability; Bayesian learning; Kullback information; free energy; generalization error; multilayer perceptron; nonidentifiable model; redundant neural networks; statistical model; Artificial neural networks; Bayesian methods; Intelligent networks; Machine learning; Maximum likelihood estimation; Multilayer perceptrons; Neural networks; Probability distribution; State estimation; Stochastic processes;
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/72.963783
Filename :
963783