DocumentCode :
1400442
Title :
Fast training of multilayer perceptrons
Author :
Verma, Brijesh
Author_Institution :
Sch. of Inf. Technol., Griffith Univ., Qld., Australia
Volume :
8
Issue :
6
fYear :
1997
fDate :
November 1997
Firstpage :
1314
Lastpage :
1320
Abstract :
Training a multilayer perceptron with the error backpropagation algorithm is slow and uncertain. This paper describes a new approach that is much faster and more reliable than error backpropagation. The proposed approach combines iterative and direct solution methods: an inverse transformation linearizes the nonlinear output activation functions, direct matrix solution methods train the weights of the output layer, and gradient descent, the delta rule, and other proposed techniques train the weights of the hidden layers. The approach has been implemented and tested on many problems. Experimental results, including training times and recognition accuracy, are given. In general, the approach achieves accuracy as good as or better than perceptrons trained with error backpropagation, while the training process is much faster and also avoids local minima and network paralysis.
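The abstract's core idea, linearizing the output activation with its inverse so the output weights can be solved directly while only the hidden weights are trained iteratively, can be sketched as follows. This is a minimal illustration under assumed details (a single hidden layer, sigmoid activations everywhere, and a least-squares solve via numpy.linalg.lstsq); it is not the authors' exact algorithm, and all names and hyperparameters are illustrative.

```python
# Sketch of the combined direct/iterative training idea from the abstract.
# Assumptions (not from the paper): one hidden layer, sigmoid activations,
# least-squares as the "direct solution matrix method".
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def inverse_sigmoid(y, eps=1e-6):
    # Inverse transformation: map targets back through the inverse of the
    # output activation so the output layer becomes a linear system.
    y = np.clip(y, eps, 1.0 - eps)
    return np.log(y / (1.0 - y))

def train(X, T, n_hidden=10, epochs=100, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    T = np.asarray(T, float).reshape(len(X), -1)   # ensure 2-D targets
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))  # hidden weights
    Z = inverse_sigmoid(T)                         # linearized targets
    for _ in range(epochs):
        H = sigmoid(X @ W1)                        # hidden activations
        # Direct step: solve the output weights in closed form by
        # least squares against the linearized targets (no iteration).
        W2, *_ = np.linalg.lstsq(H, Z, rcond=None)
        # Iterative step: delta-rule / gradient-descent update of the
        # hidden weights on the squared error of the linear system.
        E = H @ W2 - Z                             # error in linearized space
        dH = (E @ W2.T) * H * (1.0 - H)            # backprop through sigmoid
        W1 -= lr * X.T @ dH / len(X)
    return W1, W2
```

As a usage sketch, train(X, T) with X = np.array([[0,0],[0,1],[1,0],[1,1]], float) and T = np.array([0,1,1,0], float) fits the XOR problem; because the output weights are re-solved exactly at every epoch, only the hidden layer needs iterative updates, which is the source of the claimed speedup.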
Keywords :
iterative methods; learning (artificial intelligence); linearisation techniques; matrix algebra; multilayer perceptrons; delta rule; direct solution matrix methods; error backpropagation; fast training; gradient descent; inverse transformation; local minimum avoidance; multilayer perceptrons; nonlinear output activation function linearization; output layer weights; paralysis avoidance; Artificial neural networks; Backpropagation algorithms; Iterative methods; Linear systems; Multilayer perceptrons; Neurons; Nonlinear equations; Supervised learning; Testing;
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/72.641454
Filename :
641454