Title :
The training of neural classifiers with condensed datasets
Author :
Choi, Se-Ho ; Rockett, Peter
Author_Institution :
Dept. of Electron. & Electr. Eng., Univ. of Sheffield, UK
Date :
4/1/2002 12:00:00 AM
Abstract :
In this paper, we apply a k-nearest-neighbor-based data condensing algorithm to the training set of multilayer perceptron neural networks. By removing overlapping data and retaining only the training exemplars adjacent to the decision boundary, we are able to speed up network training significantly while achieving a misclassification rate no worse than that of a network trained on the unedited training set. We report results on a range of synthetic and real datasets indicating that a training speed-up of an order of magnitude is typical.
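Illustration :
The core idea of boundary-retaining condensing can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact algorithm: an exemplar is kept only if at least one of its k nearest neighbours carries a different class label, which marks it as lying near the decision boundary; exemplars deep inside a class region are discarded. The function name `condense` and the toy 1-D dataset are assumptions introduced for illustration.

```python
import math

def condense(points, labels, k=3):
    """Keep only exemplars whose k nearest neighbours include a point
    of another class, i.e. exemplars adjacent to the decision boundary.
    Illustrative sketch of k-NN-based condensing, not the paper's exact method."""
    kept = []
    for i, p in enumerate(points):
        # sort all other exemplars by distance to p, pairing each with its label
        dists = sorted(
            (math.dist(p, q), labels[j])
            for j, q in enumerate(points) if j != i
        )
        neighbour_labels = [lab for _, lab in dists[:k]]
        # keep p only if a nearest neighbour belongs to a different class
        if any(lab != labels[i] for lab in neighbour_labels):
            kept.append(i)
    return kept

# toy example: two 1-D clusters whose nearest members meet near x = 0
points = [(-3.0,), (-2.5,), (-0.2,), (0.1,), (2.4,), (3.1,)]
labels = [0, 0, 0, 1, 1, 1]
print(condense(points, labels, k=2))  # → [2, 3]
```

Only the two exemplars straddling the class boundary survive; the four interior exemplars are dropped, which is the mechanism by which the condensed training set shrinks and MLP training accelerates.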
Keywords :
feedforward neural nets; learning (artificial intelligence); multilayer perceptrons; pattern classification; data editing; decision boundary; k-nearest-neighbor-based data condensing algorithm; multilayer perceptron neural networks; undegraded misclassification rate; unedited training set; computational complexity; computational efficiency; computer architecture; degradation; multilayer neural networks; pattern classification; prototypes; sampling methods
Journal_Title :
IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)
DOI :
10.1109/3477.990876