DocumentCode :
948753
Title :
Deterministic nonmonotone strategies for effective training of multilayer perceptrons
Author :
Plagianakos, Vassilis P. ; Magoulas, George D. ; Vrahatis, Michael N.
Author_Institution :
Dept. of Math., Patras Univ., Greece
Volume :
13
Issue :
6
fYear :
2002
fDate :
1 November 2002
Firstpage :
1268
Lastpage :
1284
Abstract :
We present deterministic nonmonotone learning strategies for multilayer perceptrons (MLPs), i.e., deterministic training algorithms in which error function values are allowed to increase at some epochs. To this end, we argue that the current error function value must satisfy a nonmonotone criterion with respect to the maximum error function value of the M previous epochs, and we propose a subprocedure to dynamically compute M. The nonmonotone strategy can be incorporated in any batch training algorithm and provides fast, stable, and reliable learning. Experimental results on different classes of problems show that this approach improves the convergence speed and success percentage of first-order training algorithms and alleviates the need for fine-tuning problem-dependent heuristic parameters.
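The abstract's acceptance rule can be sketched in a few lines: a new iterate is kept whenever its error does not exceed the maximum error over the last M epochs, so occasional increases are tolerated. The sketch below is a minimal illustration of that windowed-max criterion; the rule for adapting M is a hypothetical placeholder, not the subprocedure proposed in the paper, and `error_fn`/`update_fn` are assumed callbacks standing in for the MLP's batch error and weight update.

```python
from collections import deque

def nonmonotone_training_sketch(error_fn, update_fn, w0, M0=5, epochs=50):
    """Batch training loop with a nonmonotone acceptance test.

    A new iterate is accepted if its error does not exceed the maximum
    error over the last M epochs, so the error is allowed to rise at
    some epochs.  The M-adaptation rule here is a hypothetical stand-in
    for the paper's dynamic subprocedure.
    """
    w = w0
    M = M0
    history = deque([error_fn(w)], maxlen=64)  # recent error values
    for _ in range(epochs):
        w_new = update_fn(w)                    # one batch training step
        e_new = error_fn(w_new)
        window_max = max(list(history)[-M:])    # max over last M epochs
        if e_new <= window_max:                 # nonmonotone criterion
            w = w_new
            history.append(e_new)
            M = min(10, M + 1)                  # placeholder: widen window
        else:
            M = max(1, M - 1)                   # placeholder: shrink window
    return w, list(history)
```

For a scalar quadratic error with a simple gradient step, the loop drives the error toward zero while still permitting nonmonotone epochs.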
Keywords :
convergence; feedforward neural nets; learning (artificial intelligence); multilayer perceptrons; adaptive learning rate algorithms; batch training algorithm; deterministic nonmonotone learning; deterministic training algorithms; error function values; experimental results; fine-tuning; first-order training algorithms; heuristic parameters; maximum error function value; Artificial intelligence; Backpropagation algorithms; Convergence; Information systems; Iterative algorithms; Mathematics; Minimization methods; Multilayer perceptrons; Numerical analysis
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/TNN.2002.804225
Filename :
1058065