DocumentCode :
3322289
Title :
Improving the learning rate of back-propagation with the gradient reuse algorithm
Author :
Hush, D.R. ; Salas, J.M.
Author_Institution :
Dept. of Electr. Eng. & Comput. Eng., New Mexico Univ., Albuquerque, NM, USA
fYear :
1988
fDate :
24-27 July 1988
Firstpage :
441
Abstract :
A simple method for improving the learning rate of the backpropagation algorithm is described and analyzed. The method is referred to as the gradient reuse algorithm (GRA). The basic idea is that gradients computed using backpropagation are reused several times, until the resulting weight updates no longer lead to a reduction in error. It is shown that the convergence speedup is a function of the reuse rate, and that the reuse rate can be controlled by using a dynamic convergence parameter.
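The record carries no code, so the following is only a minimal sketch of the gradient-reuse idea as the abstract describes it, not the paper's implementation. The single sigmoid layer, the mean-squared-error loss, the step-size halving, and all names (forward, loss_and_grad, gradient_reuse_descent) are illustrative assumptions.

    import numpy as np

    def forward(W, X):
        """Single sigmoid layer: a stand-in for a backprop-trained network."""
        return 1.0 / (1.0 + np.exp(-X @ W))

    def loss_and_grad(W, X, T):
        """Mean-squared error and its gradient w.r.t. the weights W."""
        Y = forward(W, X)
        err = Y - T
        loss = 0.5 * np.mean(np.sum(err ** 2, axis=1))
        # Backpropagate through the sigmoid and the linear map.
        delta = err * Y * (1.0 - Y)
        grad = X.T @ delta / X.shape[0]
        return loss, grad

    def gradient_reuse_descent(W, X, T, lr=0.5, steps=500):
        """Sketch of the GRA idea: reuse one backprop gradient for several
        weight updates, recomputing it only when the reused update no
        longer reduces the error."""
        loss, grad = loss_and_grad(W, X, T)
        fresh = True
        for _ in range(steps):
            W_try = W - lr * grad                  # reuse the stored gradient
            new_loss = loss_and_grad(W_try, X, T)[0]
            if new_loss < loss:                    # update still reduces error
                W, loss, fresh = W_try, new_loss, False
            elif not fresh:                        # stale gradient: recompute it
                grad = loss_and_grad(W, X, T)[1]
                fresh = True
            else:                                  # even a fresh gradient failed:
                lr *= 0.5                          # shrink the step size (a crude
                                                   # stand-in for the paper's
                                                   # dynamic convergence parameter)
        return W

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.standard_normal((64, 3))
        T = (X[:, :1] + X[:, 1:2] > 0).astype(float)   # toy separable target
        W = gradient_reuse_descent(0.1 * rng.standard_normal((3, 1)), X, T)
        print("final loss:", loss_and_grad(W, X, T)[0])

Each accepted reuse of the stored gradient saves one backward pass, which is where the speedup described in the abstract would come from; how often reuse succeeds (the reuse rate) is governed here by the step size.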
Keywords :
artificial intelligence; backpropagation; dynamic convergence parameter; gradient reuse algorithm; learning rate; learning systems; neural networks; reuse rate;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
IEEE International Conference on Neural Networks, 1988
Conference_Location :
San Diego, CA, USA
Type :
conf
DOI :
10.1109/ICNN.1988.23877
Filename :
23877