DocumentCode :
2617978
Title :
A generalization technique for nearest-neighbor classifiers
Author :
Kovacs, Zsolt M V ; Guerrieri, Roberto
Author_Institution :
Dept. of Electr. Eng. & Comput. Sci., Bologna Univ., Italy
fYear :
1991
fDate :
18-21 Nov 1991
Firstpage :
1782
Abstract :
The authors present a novel algorithm that provides nearly optimal generalization capabilities to k-NN (k-nearest-neighbor) classifiers. It prunes the training set and, at the same time, improves the performance of the classifier. Its computational cost is a weak function of the dimensionality of the training-set space and is not tied to any specific formulation of the metric used in the classifier. The asymptotic computational cost of the proposed algorithm is a low-order polynomial in the number of elements in the training set. It is also proved that, given a suitable performance measure, each step of the generalization algorithm monotonically improves the performance of the classifier. Finally, experimental results from the classification of handwritten digits show that the training set provided to the classifier can be reduced by more than one order of magnitude, dramatically reducing the CPU time and memory required to classify new samples.
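The abstract does not spell out the paper's pruning rule, so as a minimal illustrative sketch of the general idea (pruning a training set while keeping 1-NN decisions intact), here is Hart's classic condensed-nearest-neighbor procedure, which stands in for the authors' algorithm rather than reproduces it:

```python
import math
import random

def nn_label(prototypes, x):
    # Label of the nearest stored prototype (plain 1-NN rule).
    return min(prototypes, key=lambda p: math.dist(p[0], x))[1]

def condense(training_set, seed=0):
    """Prune a labeled training set, keeping only the points that a 1-NN
    classifier built from the kept points would otherwise misclassify.
    This is Hart's condensed-NN idea, used here only as an illustrative
    stand-in: the paper's own rule is not given in the abstract."""
    rng = random.Random(seed)
    data = list(training_set)
    rng.shuffle(data)
    kept = [data[0]]
    changed = True
    while changed:              # repeat until no point needs transferring
        changed = False
        for point in data:
            if point in kept:
                continue
            if nn_label(kept, point[0]) != point[1]:
                kept.append(point)   # misclassified -> must be kept
                changed = True
    return kept

# Two well-separated clusters: most interior points become redundant.
train = [((x, y), 0) for x in range(5) for y in range(5)]
train += [((x + 10, y), 1) for x in range(5) for y in range(5)]
pruned = condense(train)
print(len(train), "->", len(pruned))
```

By construction the loop only terminates once every training point is correctly classified by the kept prototypes, so the pruned set reproduces the 1-NN labeling of the full set at a fraction of the memory and CPU cost, mirroring the order-of-magnitude reduction reported above.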
Keywords :
neural nets; pattern recognition; asymptotic computational cost; handwritten digits; low-order polynomial; nearest-neighbor classifiers; nearly optimal generalization; training set; Approximation methods; Computational efficiency; Computer networks; Error analysis; Feedforward systems; Neural network hardware; Neural networks; Performance evaluation; Polynomials; Training data;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Neural Networks, 1991. 1991 IEEE International Joint Conference on
Print_ISBN :
0-7803-0227-3
Type :
conf
DOI :
10.1109/IJCNN.1991.170351
Filename :
170351