DocumentCode
396712
Title
Relating Bayesian learning to training in recurrent networks
Author
Spiegel, Rainer
Author_Institution
Dept. of Comput., London Univ., UK
Volume
2
fYear
2003
fDate
20-24 July 2003
Firstpage
908
Abstract
It is demonstrated that a recurrent neural network relying on an error-correcting learning algorithm and a localist coding scheme is able to converge to the solution that would be expected from Bayesian learning. This is possible even without implementing Bayes' theorem explicitly and without assigning prior probabilities to the model.
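A minimal illustrative sketch of the idea stated in the abstract (not the author's network): a single-layer model with localist input and output coding, trained with an error-correcting (delta) rule, has weights that converge toward the conditional probabilities a Bayesian learner would estimate from the same data. The cue/outcome setup, probabilities, and learning rate below are assumptions chosen only for illustration.

```python
import numpy as np

# Sketch only: localist coding + error-correcting (delta-rule) learning.
# With one-hot inputs and targets, the weight from cue i to outcome j
# tends toward P(outcome=j | cue=i), i.e. the value Bayesian inference
# would assign, without applying Bayes' theorem or setting priors.

rng = np.random.default_rng(0)

n_cues, n_outcomes = 2, 2
true_cond = np.array([[0.8, 0.2],    # assumed P(outcome | cue): rows = cues
                      [0.3, 0.7]])

W = np.zeros((n_cues, n_outcomes))   # weights start at zero
lr = 0.05                            # assumed learning rate

for _ in range(20000):
    cue = rng.integers(n_cues)                        # random cue
    outcome = rng.choice(n_outcomes, p=true_cond[cue])  # sampled outcome

    x = np.zeros(n_cues); x[cue] = 1.0                # localist input
    t = np.zeros(n_outcomes); t[outcome] = 1.0        # localist target

    y = x @ W                                         # prediction
    W += lr * np.outer(x, t - y)                      # error-correcting update

print("learned weights:\n", W.round(2))
print("true conditional probabilities:\n", true_cond)
```

Running the sketch, the learned weight matrix approximates the conditional probability table, illustrating how error-correcting learning can arrive at a Bayesian-looking solution without explicit priors.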
Keywords
Bayes methods; error correction; learning (artificial intelligence); recurrent neural nets; Bayesian learning; error correcting learning algorithm; localist coding; neural network; recurrent networks; Bayesian methods; Educational institutions; Error correction; Intelligent networks; Neural networks; Probability; Psychology; Recurrent neural networks; Statistics; Training data
fLanguage
English
Publisher
ieee
Conference_Title
Proceedings of the International Joint Conference on Neural Networks, 2003
ISSN
1098-7576
Print_ISBN
0-7803-7898-9
Type
conf
DOI
10.1109/IJCNN.2003.1223811
Filename
1223811