DocumentCode :
3783354
Title :
Learning in neural networks by normalized stochastic gradient algorithm: local convergence
Author :
V. Tadic; S. Stankovic
Author_Institution :
Autom. Control Lab., Mihailo Pupin Inst., Belgrade, Yugoslavia
fYear :
2000
Firstpage :
11
Lastpage :
17
Abstract :
In this paper, a normalized stochastic gradient algorithm is proposed for learning in feedforward neural networks. Using a new methodology based on martingale convergence results, the asymptotic properties of the algorithm are analyzed. It is proved that, in general, the sequence of algorithm states converges with probability one to the set of zeros of the gradient of the criterion function, locally on the event where the sequence remains bounded. These results are then applied to learning in multilayer perceptrons.
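The normalized update described in the abstract can be sketched as follows. The abstract does not give the paper's exact normalization or step-size schedule, so the form a_k * g / (1 + ||g||) with a_k = 1/(k+1), and the noisy quadratic criterion used to exercise it, are illustrative assumptions only:

```python
import random

def normalized_sgd(grad, theta0, steps=5000, seed=0):
    """Minimize a criterion with a normalized stochastic gradient step.

    Hypothetical update (the paper's exact normalization is not stated
    in the abstract):  theta <- theta - a_k * g / (1 + ||g||),
    with step sizes a_k = 1/(k+1).  Normalizing keeps each step bounded
    even when the noisy gradient estimate g is large.
    """
    rng = random.Random(seed)
    theta = list(theta0)
    for k in range(steps):
        g = grad(theta, rng)
        norm = sum(gi * gi for gi in g) ** 0.5
        scale = (1.0 / (k + 1)) / (1.0 + norm)
        theta = [t - scale * gi for t, gi in zip(theta, g)]
    return theta

def noisy_quadratic_grad(theta, rng):
    # Noisy gradient of the criterion 0.5 * ||theta - (1, -2)||^2:
    # the true gradient theta - target plus zero-mean Gaussian noise,
    # standing in for the stochastic gradients seen during learning.
    target = [1.0, -2.0]
    return [t - c + rng.gauss(0.0, 0.1) for t, c in zip(theta, target)]

theta_hat = normalized_sgd(noisy_quadratic_grad, [2.0, -1.0])
```

With the decreasing step sizes, the iterates settle near the criterion's minimizer, consistent with the abstract's claim of almost-sure convergence to the zeros of the gradient on the event where the state sequence stays bounded.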
Keywords :
"Intelligent networks","Neural networks","Stochastic processes","Convergence","Feedforward neural networks","Algorithm design and analysis","Backpropagation algorithms","Multilayer perceptrons","Multi-layer neural network","Automatic control"
Publisher :
ieee
Conference_Titel :
Proceedings of the 5th Seminar on Neural Network Applications in Electrical Engineering (NEUREL 2000)
Print_ISBN :
0-7803-5512-1
Type :
conf
DOI :
10.1109/NEUREL.2000.902375
Filename :
902375