Title :
Automatic learning rate optimization by higher-order derivatives
Author :
Yu, Xiao-Hu ; Xu, Li-Qun
Author_Institution :
Nat. Commun. Res. Lab., Southeast Univ., Nanjing, China
Abstract :
Automatic optimization of the learning rate is a central issue in improving the efficiency and applicability of backpropagation learning. This paper investigates techniques that exploit the first four derivatives of the backpropagation error surface with respect to the learning rate. The derivatives are obtained from an extended feedforward propagation procedure and can be computed iteratively. A near-optimal dynamic learning rate is obtained with only a moderate increase in computational complexity per iteration, which scales in the same way as the plain backpropagation algorithm (BPA); the proposed method achieves rapid convergence and running-time savings of at least an order of magnitude compared with the BPA.
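The general idea described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): model the error as a fourth-order Taylor polynomial in the learning rate and minimize that polynomial to pick a near-optimal step. The function name, the finite-difference estimation of the derivatives, and the toy error surface below are all illustrative assumptions; the paper itself derives the derivatives analytically via an extended feedforward propagation.

```python
# Hypothetical sketch: choose a near-optimal learning rate by minimizing a
# fourth-order Taylor model of the error E(eta) around eta = 0.
# The derivatives are estimated here by finite differences purely for
# illustration; the paper computes them from an extended feedforward pass.
import numpy as np

def near_optimal_lr(error_of_lr, eta_max=1.0, h=1e-3):
    """Return the eta in [0, eta_max] minimizing the quartic Taylor model of error_of_lr at 0."""
    # Sample E at -2h, -h, 0, h, 2h for central finite differences.
    etas = np.array([-2, -1, 0, 1, 2]) * h
    E = np.array([error_of_lr(e) for e in etas])
    d1 = (E[3] - E[1]) / (2 * h)                                   # E'(0)
    d2 = (E[3] - 2 * E[2] + E[1]) / h**2                           # E''(0)
    d3 = (E[4] - 2 * E[3] + 2 * E[1] - E[0]) / (2 * h**3)          # E'''(0)
    d4 = (E[4] - 4 * E[3] + 6 * E[2] - 4 * E[1] + E[0]) / h**4     # E''''(0)

    # Quartic Taylor model:
    # E(eta) ~ E(0) + d1*eta + d2/2*eta^2 + d3/6*eta^3 + d4/24*eta^4
    coeffs = [d4 / 24.0, d3 / 6.0, d2 / 2.0, d1, E[2]]  # highest power first
    grid = np.linspace(0.0, eta_max, 1001)
    model = np.polyval(coeffs, grid)
    return grid[np.argmin(model)]

# Toy example: E(eta) = (1 - 3*eta)^2 has its minimum at eta = 1/3.
if __name__ == "__main__":
    print(near_optimal_lr(lambda eta: (1.0 - 3.0 * eta) ** 2))  # approximately 0.333
```

In practice, error_of_lr would evaluate the network error after a step of size eta along the negative gradient, so the polynomial fit amounts to a one-dimensional line search along the current search direction.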
Keywords :
backpropagation; conjugate gradient methods; convergence; feedforward neural nets; optimisation; polynomials; backpropagation learning; computational complexity; conjugate gradient method; convergence; dynamic learning rate; feedforward propagation; higher-order derivatives; learning rate optimization; polynomials; Artificial neural networks; Backpropagation algorithms; Computational complexity; Convergence; Cost function; Differential equations; Intelligent systems; Iterative methods; Laboratories; Polynomials;
Conference_Title :
Neural Networks, 1997, International Conference on
Conference_Location :
Houston, TX
Print_ISBN :
0-7803-4122-8
DOI :
10.1109/ICNN.1997.616177