DocumentCode :
2702620
Title :
A study of possible improvements to the Alopex training algorithm
Author :
Bia, Alejandro
Author_Institution :
Dept. de Lenguajes y Sistemas Inf., Alicante Univ., Spain
fYear :
2000
fDate :
2000
Firstpage :
125
Lastpage :
130
Abstract :
We studied the performance of the Alopex algorithm and propose modifications that improve training time and simplify the algorithm. We tested several variations of the algorithm, describe the best cases, and summarize the conclusions we reached. One of the proposed variations (99/B) performs slightly faster than the Alopex algorithm described by Unnikrishnan et al. (1994) and shows fewer unsuccessful training attempts, while being simpler to implement. Like Alopex, our versions are based on local correlations between changes in individual weights and changes in the global error measure. Our algorithm is also stochastic, but it differs from Alopex in that no annealing scheme is applied during training and hence it uses fewer parameters.
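The abstract describes the core mechanism: each weight is nudged by a fixed step whose sign is taken from the local correlation between the previous weight change and the previous change in the global error, with some residual randomness but no temperature/annealing schedule. The sketch below illustrates that idea only; it is not the paper's exact 99/B variant, and the function name, the p_random parameter, and the step size are illustrative assumptions.

```python
import numpy as np

def alopex_like_step(w, prev_dw, prev_de, step=0.01, p_random=0.1, rng=None):
    """One Alopex-style weight update (illustrative sketch, not the paper's method).

    Each weight moves by +/- step. The sign comes from the element-wise
    correlation between the previous weight change and the previous change
    in the global error: if that move reduced the error, keep the same
    direction, otherwise reverse it. With probability p_random a random
    sign is used instead, which keeps the search stochastic without an
    annealing/temperature schedule.
    """
    rng = rng or np.random.default_rng()
    corr = prev_dw * prev_de                    # local correlation per weight
    sign = -np.sign(corr)                       # move so as to decrease the error
    undecided = sign == 0                       # no information yet: pick at random
    sign[undecided] = rng.choice([-1.0, 1.0], size=int(undecided.sum()))
    flip = rng.random(w.shape) < p_random       # occasional random exploration
    sign[flip] = rng.choice([-1.0, 1.0], size=int(flip.sum()))
    dw = step * sign
    return w + dw, dw
```

In a training loop one would keep the previous weight change and the previous error change, call alopex_like_step, recompute the network's global error, and repeat; only the error value is needed, no gradients.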
Keywords :
learning (artificial intelligence); probability; recurrent neural nets; Alopex algorithm; learning algorithm; local correlations; recurrent neural networks; Annealing; Computer networks; Distributed computing; Energy measurement; Logistics; Neural networks; Probability; Stochastic processes; Temperature; Testing;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Proceedings of the Sixth Brazilian Symposium on Neural Networks (SBRN 2000)
Conference_Location :
Rio de Janeiro, RJ
ISSN :
1522-4899
Print_ISBN :
0-7695-0856-1
Type :
conf
DOI :
10.1109/SBRN.2000.889726
Filename :
889726