DocumentCode :
2595434
Title :
Fast backpropagation learning using steep activation functions and automatic weight reinitialization
Author :
Cho, Tai-Hoon ; Conners, Richard W. ; Araman, P.A.
Author_Institution :
Spatial Data Anal. Lab., Virginia Polytech. Inst. & State Univ., Blacksburg, VA, USA
fYear :
1991
fDate :
13-16 Oct 1991
Firstpage :
1587
Abstract :
Several backpropagation (BP) learning speed-up algorithms that employ the gain parameter, i.e., the steepness of the activation function, are examined to determine the effect of increased gain on learning time. Simulation showed that although these algorithms can converge faster than the standard BP learning algorithm on some problems, they can be unstable in convergence, i.e., they frequently fail to converge within a finite time. One main reason for this divergence is an inappropriate setting of the initial weights in the network. To overcome this instability, an automatic random reinitialization of the weights is proposed for when convergence becomes very slow. BP learning algorithms with this weight reinitialization and a larger initial gain (around 2 or 3) were found to be much faster and more stable in convergence than those without weight reinitialization.
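The abstract's scheme can be illustrated in a minimal sketch: a sigmoid with an explicit gain parameter, plain batch backpropagation, and a random restart of the weights whenever the error plateaus. This is an assumption-laden reconstruction (network size, learning rate, stall window, and thresholds are all invented for illustration), not the authors' exact algorithm.

```python
import numpy as np

def sigmoid(x, gain):
    # Activation with adjustable steepness: f(x) = 1 / (1 + exp(-gain * x)).
    # Larger gain makes the sigmoid steeper, which speeds learning but can
    # destabilize convergence for a bad initial weight setting.
    return 1.0 / (1.0 + np.exp(-gain * x))

def train_bp(X, T, hidden=4, gain=2.0, lr=0.5, max_epochs=20000,
             stall_window=500, stall_tol=1e-5, rng=None):
    # Batch BP with steep activations and automatic random weight
    # reinitialization when progress stalls (hypothetical parameter choices).
    rng = np.random.default_rng(rng)
    n_in, n_out = X.shape[1], T.shape[1]

    def init():
        # Small random weights, including a bias row for each layer.
        return (rng.uniform(-0.5, 0.5, (n_in + 1, hidden)),
                rng.uniform(-0.5, 0.5, (hidden + 1, n_out)))

    W1, W2 = init()
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias input
    prev_err = np.inf
    since_check = 0
    for epoch in range(max_epochs):
        H = sigmoid(Xb @ W1, gain)
        Hb = np.hstack([H, np.ones((H.shape[0], 1))])
        Y = sigmoid(Hb @ W2, gain)
        err = 0.5 * np.sum((T - Y) ** 2)
        if err < 1e-3:
            return W1, W2, epoch  # converged
        # Delta rule; derivative of the gain sigmoid is gain * y * (1 - y).
        dY = (T - Y) * gain * Y * (1 - Y)
        dH = (dY @ W2[:-1].T) * gain * H * (1 - H)
        W2 += lr * Hb.T @ dY
        W1 += lr * Xb.T @ dH
        since_check += 1
        if since_check >= stall_window:
            # If the error barely improved over the window, assume a bad
            # initial weight setting and reinitialize at random.
            if prev_err - err < stall_tol:
                W1, W2 = init()
                prev_err = np.inf
            else:
                prev_err = err
            since_check = 0
    return W1, W2, max_epochs
```

For example, on XOR (`X = [[0,0],[0,1],[1,0],[1,1]]`, targets `[0,1,1,0]`) a high-gain run that stalls in a poor region is restarted automatically instead of diverging indefinitely, which is the stabilizing effect the abstract reports.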
Keywords :
convergence; learning systems; neural nets; parallel algorithms; automatic weight reinitialization; backpropagation learning algorithm; gain parameter; steep activation functions; Artificial neural networks; Backpropagation algorithms; Data analysis; Feedforward neural networks; Laboratories; Multi-layer neural network; Neural networks; Neurons; Performance gain;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
1991 IEEE International Conference on Systems, Man, and Cybernetics, 'Decision Aiding for Complex Systems', Conference Proceedings
Conference_Location :
Charlottesville, VA
Print_ISBN :
0-7803-0233-8
Type :
conf
DOI :
10.1109/ICSMC.1991.169915
Filename :
169915