DocumentCode :
315241
Title :
Automatic learning rate optimization by higher-order derivatives
Author :
Yu, Xiao-Hu ; Xu, Li-Qun
Author_Institution :
Nat. Commun. Res. Lab., Southeast Univ., Nanjing, China
Volume :
2
fYear :
1997
fDate :
9-12 Jun 1997
Firstpage :
1072
Abstract :
Automatic optimization of the learning rate is central to improving the efficiency and applicability of backpropagation learning. This paper investigates techniques that exploit the first four derivatives of the backpropagation error surface with respect to the learning rate. The derivatives are obtained from an extended feedforward propagation procedure and can be calculated iteratively. A near-optimal dynamic learning rate is obtained with only a moderate increase in computational complexity per iteration, which scales like that of the plain backpropagation algorithm (BPA); the proposed method achieves rapid convergence and reduces running time by at least an order of magnitude compared with the BPA.
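A minimal sketch of the idea described in the abstract, not the authors' exact procedure: the error along the steepest-descent direction, E(eta), is modeled by a fourth-order Taylor polynomial in the learning rate eta, with the first four derivatives at eta = 0 obtained here by generic automatic differentiation rather than the paper's extended feedforward propagation. The network, loss, parameter names, and the grid-search step are illustrative assumptions.

import math
import jax
import jax.numpy as jnp

def loss(params, x, y):
    # Single-hidden-layer feedforward net with squared error (assumed model).
    h = jnp.tanh(x @ params["W1"] + params["b1"])
    out = h @ params["W2"] + params["b2"]
    return 0.5 * jnp.mean((out - y) ** 2)

def near_optimal_eta(params, x, y, eta_max=1.0):
    grads = jax.grad(loss)(params, x, y)

    def err_along_step(eta):
        # Error surface restricted to the steepest-descent line.
        stepped = jax.tree_util.tree_map(lambda p, g: p - eta * g, params, grads)
        return loss(stepped, x, y)

    # First four derivatives of E(eta) at eta = 0 via repeated differentiation.
    d1 = jax.grad(err_along_step)
    d2 = jax.grad(d1)
    d3 = jax.grad(d2)
    d4 = jax.grad(d3)
    c = [err_along_step(0.0), d1(0.0), d2(0.0), d3(0.0), d4(0.0)]

    # Quartic Taylor model of the error, minimized over a candidate grid.
    etas = jnp.linspace(0.0, eta_max, 200)
    taylor = sum(c[k] * etas**k / math.factorial(k) for k in range(5))
    return float(etas[jnp.argmin(taylor)])

The grid search only keeps the sketch short; the quartic model could instead be minimized in closed form from the roots of its cubic derivative, and the paper computes the derivatives through the forward propagation itself rather than by generic autodiff.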
Keywords :
backpropagation; conjugate gradient methods; convergence; feedforward neural nets; optimisation; polynomials; backpropagation learning; computational complexity; conjugate gradient method; convergence; dynamic learning rate; feedforward propagation; higher-order derivatives; learning rate optimization; polynomials; Artificial neural networks; Backpropagation algorithms; Computational complexity; Convergence; Cost function; Differential equations; Intelligent systems; Iterative methods; Laboratories; Polynomials;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Neural Networks, 1997., International Conference on
Conference_Location :
Houston, TX
Print_ISBN :
0-7803-4122-8
Type :
conf
DOI :
10.1109/ICNN.1997.616177
Filename :
616177