DocumentCode :
3153638
Title :
An efficient learning algorithm for the backpropagation artificial neural network
Author :
Byrne, P.C.
Year :
1990
Date :
1-4 Apr 1990
Firstpage :
61
Abstract :
Two conditions for reducing the number of learning iterations in backpropagation artificial neural networks are introduced. The first condition is to scale the target outputs so that they fall within a small range (±0.1) of the output value at which the slope of the output node's nonlinear activation function is maximum; for the sigmoid function this point is 0.5. The second condition is to learn the input patterns selectively, rather than sequentially, until the error falls below the desired limit. Introducing these techniques does not affect the memory retention or generalization capabilities of such networks. Applying these concepts to the classical XOR learning problem reduced the number of learning iterations by a factor of seven relative to the results published by D.E. Rumelhart et al. (Parallel Distributed Processing, vol. 1, ch. 8, Cambridge, MA: MIT Press, 1986).
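The record contains no code, so the following is a minimal sketch of the two conditions on the XOR problem, not the authors' implementation. It assumes a 2-2-1 sigmoid network trained by plain backpropagation; the scaled targets of 0.4/0.6 follow condition 1 (within ±0.1 of the sigmoid's maximum-slope output, 0.5), while the error tolerance of 0.05, the learning rate of 0.5, and the weight-initialization range are illustrative choices not taken from the paper.

```python
# Sketch of the two conditions from the abstract (assumed hyperparameters).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.4], [0.6], [0.6], [0.4]])  # condition 1: scaled XOR targets

W1 = rng.uniform(-0.5, 0.5, (2, 2)); b1 = np.zeros(2)
W2 = rng.uniform(-0.5, 0.5, (2, 1)); b2 = np.zeros(1)
lr, tol = 0.5, 0.05  # illustrative learning rate and error tolerance

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)      # hidden layer activations
    y = sigmoid(h @ W2 + b2)      # output activation
    return h, y

for it in range(100000):
    # Condition 2: select only the patterns whose error still exceeds tol,
    # instead of presenting all patterns in fixed sequential order.
    errs = np.array([abs(forward(x)[1] - t).max() for x, t in zip(X, T)])
    todo = np.flatnonzero(errs > tol)
    if todo.size == 0:
        print(f"converged after {it} selective passes")
        break
    for i in todo:
        h, y = forward(X[i])
        dy = (y - T[i]) * y * (1 - y)   # output delta (sigmoid derivative)
        dh = (dy @ W2.T) * h * (1 - h)  # hidden delta via backpropagation
        W2 -= lr * np.outer(h, dy); b2 -= lr * dy
        W1 -= lr * np.outer(X[i], dh); b1 -= lr * dh
```

Because the targets sit in the mid-range of the sigmoid, the output unit operates near its maximum-slope region throughout training, which is the mechanism the abstract credits for the reduced iteration count.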
Keywords :
iterative methods; learning systems; neural nets; backpropagation; learning algorithm; learning iterations; neural network; nonlinear activation function; sigmoid function; Algorithm design and analysis; Artificial neural networks; Backpropagation algorithms; Computer networks; Equations
Language :
English
Publisher :
IEEE
Conference_Title :
IEEE Southeastcon '90 Proceedings
Conference_Location :
New Orleans, LA
Type :
conf
DOI :
10.1109/SECON.1990.117770
Filename :
117770