DocumentCode :
2619258
Title :
Terminal attractor learning algorithms for back propagation neural networks
Author :
Wang, Sheng-De ; Hsu, Ching-Hao
Author_Institution :
Dept. of Electr. Eng., Nat. Taiwan Univ., Taipei, Taiwan
fYear :
1991
fDate :
18-21 Nov 1991
Firstpage :
183
Abstract :
Novel learning algorithms, called terminal attractor backpropagation (TABP) and heuristic terminal attractor backpropagation (HTABP), are proposed for multilayer networks. The algorithms are based on the concept of terminal attractors: fixed points of a dynamical system at which the Lipschitz condition is violated. The key idea in the proposed algorithms is the introduction of time-varying gains into the weight update law. The proposed algorithms preserve the parallel and distributed features of neurocomputing, guarantee that the learning process converges in finite time, and find the set of weights that globally minimizes the error function, provided such a set of weights exists. Simulations demonstrate the global optimization properties and the superiority of the proposed algorithms over the standard backpropagation algorithm.
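The terminal-attractor idea underlying the abstract can be illustrated with a minimal sketch (not the paper's exact update law; the gain `k`, exponent 1/3, and integration parameters are illustrative assumptions): the dynamics dx/dt = -k·x^(1/3) violate the Lipschitz condition at the fixed point x = 0, so trajectories reach it in finite time, whereas the ordinary linear dynamics dx/dt = -k·x only converge asymptotically.

```python
def simulate(x0, rhs, dt=1e-3, t_end=5.0):
    """Forward-Euler integration of dx/dt = rhs(x); returns the final state."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * rhs(x)
        if x <= 0.0:          # terminal attractor reached in finite time
            return 0.0
    return x

k = 1.0  # illustrative gain, not taken from the paper
# Terminal-attractor dynamics: Lipschitz condition fails at x = 0,
# so x reaches 0 in finite time (analytically at t* = 1.5 here).
x_terminal = simulate(1.0, lambda x: -k * x ** (1.0 / 3.0))
# Ordinary exponential decay: x only approaches 0 asymptotically.
x_ordinary = simulate(1.0, lambda x: -k * x)

print(x_terminal)  # 0.0 — fixed point reached within the simulated horizon
print(x_ordinary)  # still positive (roughly exp(-5))
```

In TABP, an analogous fractional-power, time-varying gain in the weight update drives the error to its minimum in finite time rather than asymptotically.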
Keywords :
learning systems; neural nets; optimisation; global optimization; heuristic terminal attractor backpropagation; learning systems; multilayer networks; neural nets; neurocomputing; terminal attractor backpropagation; time-varying gains; weight update law; Convergence; Electronic mail; Error correction; Feedforward neural networks; Modeling; Multi-layer neural network; Multidimensional signal processing; Neural networks; Neurons; Signal processing algorithms;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Neural Networks, 1991. 1991 IEEE International Joint Conference on
Print_ISBN :
0-7803-0227-3
Type :
conf
DOI :
10.1109/IJCNN.1991.170401
Filename :
170401