Title :
Improved convergence for output scaling of a feedforward network with linear output nodes
Author :
Soloway, Donald I.
Author_Institution :
NASA Langley Res. Center, Hampton, VA, USA
Abstract :
The author presents an augmentation to the gradient-descent learning algorithm for a feedforward neural network that improves convergence when learning desired outputs whose magnitude exceeds one. The enhancement to the standard backpropagation algorithm is simple to implement in existing code, is computationally efficient, and reduces the number of training cycles. These features save training time for any network that requires output scaling.
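To illustrate the setting the abstract describes (not the paper's augmentation itself), the following minimal sketch trains a one-hidden-layer feedforward network with a linear output node by plain gradient descent on targets whose magnitude exceeds one; the architecture, sizes, and learning rate are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (NOT the paper's augmentation): a feedforward network
# with sigmoid hidden units and a LINEAR output node, trained by plain
# batch gradient descent. The linear output lets the network represent
# targets with |y| > 1, which a squashing output unit cannot.
rng = np.random.default_rng(0)

X = np.linspace(-1.0, 1.0, 21).reshape(-1, 1)
y = 3.0 * X  # desired outputs with magnitude greater than one

# Hypothetical sizes: 1 input, 8 hidden units, 1 linear output
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)   # hidden layer: sigmoid
    return h, h @ W2 + b2      # output layer: linear (no squashing)

_, out0 = forward(X)
mse0 = float(np.mean((out0 - y) ** 2))  # error before training

for _ in range(5000):
    h, out = forward(X)
    err = out - y                        # gradient of sum-of-squares error
    # Backpropagate through the linear output and sigmoid hidden layer
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1.0 - h)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out_final = forward(X)
mse = float(np.mean((out_final - y) ** 2))
```

Because the output node is linear, the network's range is not confined to (0, 1) or (-1, 1), so no rescaling of the targets is needed; the paper's contribution concerns making training in this configuration converge faster.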
Keywords :
convergence; feedforward neural nets; learning (artificial intelligence); gradient descent learning algorithm; linear output nodes; output scaling; backpropagation algorithms; training cycles; neural networks
Conference_Titel :
1993 IEEE International Conference on Neural Networks
Conference_Location :
San Francisco, CA
Print_ISBN :
0-7803-0999-5
DOI :
10.1109/ICNN.1993.298693