DocumentCode :
1262363
Title :
The training of neural classifiers with condensed datasets
Author :
Choi, Se-Ho ; Rockett, Peter
Author_Institution :
Dept. of Electron. & Electr. Eng., Univ. of Sheffield, UK
Volume :
32
Issue :
2
fYear :
2002
fDate :
4/1/2002
Firstpage :
202
Lastpage :
206
Abstract :
In this paper, we apply a k-nearest-neighbor-based data condensing algorithm to the training set of multilayer perceptron neural networks. By removing the overlapping data and retaining only training exemplars adjacent to the decision boundary, we are able to significantly speed up network training while achieving an undegraded misclassification rate compared to a network trained on the unedited training set. We report results on a range of synthetic and real datasets indicating that a training speed-up of an order of magnitude is typical.
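The abstract describes condensing a training set by keeping only exemplars adjacent to the decision boundary before training the perceptron. A minimal sketch of one such k-NN condensing step is shown below, under the assumption (not stated in the record) that a point is deemed boundary-adjacent when at least one of its k nearest neighbours belongs to a different class; the function name `condense` and the distance metric are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def condense(X, y, k=5):
    """Keep only exemplars near the decision boundary.

    A point is retained when at least one of its k nearest
    neighbours (excluding itself) has a different class label.
    This is a hypothetical reading of a k-NN-based condensing
    step, not the authors' published procedure.
    """
    n = len(X)
    # Pairwise squared Euclidean distances between all exemplars.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)            # exclude each point from its own neighbourhood
    nn = np.argsort(d2, axis=1)[:, :k]      # indices of the k nearest neighbours
    keep = np.array([np.any(y[nn[i]] != y[i]) for i in range(n)])
    return X[keep], y[keep]

# Two well-separated clusters plus two points straddling the boundary:
X = np.array([[0, 0], [0, 1], [1, 0],
              [5, 5], [5, 6], [6, 5],
              [2.4, 2.4], [2.6, 2.6]])
y = np.array([0, 0, 0, 1, 1, 1, 0, 1])
Xc, yc = condense(X, y, k=3)
# Only the two boundary-adjacent points survive the condensing step.
```

Training the network on `Xc, yc` instead of `X, y` is what yields the reported speed-up, since the interior (non-boundary) exemplars contribute little to shaping the decision surface.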
Keywords :
feedforward neural nets; learning (artificial intelligence); multilayer perceptrons; pattern classification; data editing; decision boundary; k-nearest-neighbor-based data condensing algorithm; multilayer perceptron neural networks; undegraded misclassification rate; unedited training set; Computational complexity; Computational efficiency; Computer architecture; Degradation; Multi-layer neural network; Multilayer perceptrons; Neural networks; Pattern classification; Prototypes; Sampling methods
fLanguage :
English
Journal_Title :
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Publisher :
IEEE
ISSN :
1083-4419
Type :
jour
DOI :
10.1109/3477.990876
Filename :
990876