DocumentCode :
1474831
Title :
Novel Maximum-Margin Training Algorithms for Supervised Neural Networks
Author :
Ludwig, Oswaldo ; Nunes, Urbano
Author_Institution :
Dept. of Electr. & Comput. Eng., Univ. of Coimbra Polo II, Coimbra, Portugal
Volume :
21
Issue :
6
fYear :
2010
fDate :
6/1/2010
Firstpage :
972
Lastpage :
984
Abstract :
This paper proposes three novel training methods for multilayer perceptron (MLP) binary classifiers: two based on the backpropagation approach and a third based on information theory. Both backpropagation methods rely on the maximal-margin (MM) principle. The first, built on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, thereby avoiding the testing of many arbitrary kernels, as occurs in support vector machine (SVM) training. The proposed MM-based objective function aims to stretch the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while avoiding the complexity of solving the constrained optimization problem that usually arises in SVM training. In fact, all the training methods proposed in this paper have time and space complexity O(N), whereas usual SVM training methods have time complexity O(N³) and space complexity O(N²), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create a hidden-layer output in which the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to combine the strengths of each proposed training method. The main idea is to compose a neural model using neurons extracted from three other neural networks, each previously trained by MICI, MMGDX, and Levenberg-Marquardt (LM), respectively. The resulting neural network is named assembled neural network (ASNN). Benchmark data sets of real-world problems have been used in experiments that enable a comparison with other state-of-the-art classifiers. The results provide evidence of the effectiveness of our methods regarding accuracy, AUC, and balanced error rate.
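The sketch below illustrates the general idea of backpropagating a margin-based objective through both MLP layers jointly, as the abstract describes for MMGDX. It is a minimal illustration only: it assumes a standard hinge loss and plain SGD in PyTorch, whereas the paper's exact MM objective, its Lp-norm variant, and the GDX adaptive learning rate are not reproduced here; the layer sizes, data, and hyperparameters are arbitrary assumptions.

import torch
import torch.nn as nn

class MLP(nn.Module):
    # Two-layer perceptron binary classifier; the final linear layer
    # defines the output-layer hyperplane whose margin is to be enlarged.
    def __init__(self, n_in=10, n_hidden=20):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, 1)

    def forward(self, x):
        return self.out(torch.tanh(self.hidden(x))).squeeze(-1)

def margin_loss(scores, y):
    # Hinge-style stand-in for an MM-based objective: penalize points
    # whose signed score y * f(x) falls inside a unit margin.
    return torch.clamp(1.0 - y * scores, min=0.0).mean()

model = MLP()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

# Toy data: label determined by the sign of the first feature.
x = torch.randn(200, 10)
y = (x[:, 0] > 0).float() * 2 - 1  # labels in {-1, +1}

for epoch in range(100):
    opt.zero_grad()
    loss = margin_loss(model(x), y)
    loss.backward()  # gradient flows through output AND hidden layers jointly
    opt.step()

Note that each epoch costs O(N) in the training-set size, consistent with the complexity advantage over SVM training claimed in the abstract.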
Keywords :
backpropagation; computational complexity; gradient methods; optimisation; pattern classification; support vector machines; Fisher discriminant analysis; MLP output-layer hyperplane; MM-based objective function; ROC curve; adaptive learning rate algorithm; assembled neural network; backpropagation methods; constrained optimization problem; gradient descent; information theory; interclass interference minimization; maximum-margin GDX; maximum-margin training algorithms; multilayer perceptron binary classifiers; space complexities; statistical distribution; supervised neural networks; support vector machine training; time complexities; Information theory; maximal-margin (MM) principle; multilayer perceptron (MLP); pattern recognition; supervised learning; Algorithms; Computer Simulation; Feedback; Humans; Information Theory; Learning; Neural Networks (Computer); Pattern Recognition, Automated; ROC Curve;
fLanguage :
English
Journal_Title :
Neural Networks, IEEE Transactions on
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/TNN.2010.2046423
Filename :
5451102