DocumentCode
423651
Title
Softprop: softmax neural network backpropagation learning
Author
Rimer, Michael; Martinez, Tony
Author_Institution
Dept. of Comput. Sci., Brigham Young Univ., Provo, UT, USA
Volume
2
fYear
2004
fDate
25-29 July 2004
Firstpage
979
Abstract
Multi-layer backpropagation, like many learning algorithms that can create complex decision surfaces, is prone to overfitting. Softprop, the novel learning approach presented here, is reminiscent of the softmax explore-exploit search heuristic used in Q-learning. It fits the problem while delaying settling into error minima, achieving better generalization and more robust learning. This is accomplished by blending standard sum-squared-error (SSE) optimization with lazy training, a new objective function well suited to learning classification tasks, to form a more stable learning model. Over several machine learning data sets, softprop reduces classification error by 17.1 percent and the variance in results by 38.6 percent relative to standard SSE minimization.
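The abstract's blend of standard SSE optimization with a lazy-training error signal can be illustrated with a short sketch. The Python code below is a minimal, assumed formulation: the margin mu, the blending coefficient alpha, and all function names are illustrative choices, not the paper's actual equations.

import numpy as np

def sse_error(outputs, targets):
    # Standard SSE error term: push every output toward its 0/1 target.
    return targets - outputs

def lazy_error(outputs, targets, mu=0.1):
    # Lazy-training-style error term (assumed form): nonzero only when the
    # target-class output fails to beat its best competitor by a margin mu,
    # so already well-classified patterns receive no weight update.
    err = np.zeros_like(outputs, dtype=float)
    for i in range(outputs.shape[0]):
        t_idx = int(np.argmax(targets[i]))
        competitors = outputs[i].copy()
        competitors[t_idx] = -np.inf
        c_idx = int(np.argmax(competitors))
        if outputs[i, t_idx] - outputs[i, c_idx] < mu:
            err[i, t_idx] = 1.0 - outputs[i, t_idx]  # raise the target output
            err[i, c_idx] = -outputs[i, c_idx]       # lower the competitor
    return err

def softprop_error(outputs, targets, alpha):
    # Blend the two signals: alpha near 1 behaves like plain SSE,
    # alpha near 0 behaves like lazy training.
    return alpha * sse_error(outputs, targets) + (1.0 - alpha) * lazy_error(outputs, targets)

# Example: three patterns, two classes. One plausible schedule (again, an
# assumption) anneals alpha from 1 toward 0 over training, starting with SSE
# fitting and gradually shifting emphasis to the lazy classification criterion.
outputs = np.array([[0.9, 0.2], [0.4, 0.5], [0.6, 0.55]])
targets = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(softprop_error(outputs, targets, alpha=0.5))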
Keywords
backpropagation; feedforward neural nets; generalisation (artificial intelligence); multilayer perceptrons; optimisation; pattern classification; search problems; Q-learning algorithm; Q-learning search heuristic; backpropagation learning; classification error reduction; complex decision surfaces; generalization; lazy training; machine learning data sets; multilayer backpropagation; softmax neural network; softprop technique; standard sum squared error optimisation; Backpropagation algorithms; Computer science; Delay; Feedforward neural networks; Feedforward systems; Machine learning; Multi-layer neural network; Neural networks; Robustness; Training data
fLanguage
English
Publisher
ieee
Conference_Title
Proceedings of the 2004 IEEE International Joint Conference on Neural Networks
ISSN
1098-7576
Print_ISBN
0-7803-8359-1
Type
conf
DOI
10.1109/IJCNN.2004.1380066
Filename
1380066
Link To Document