DocumentCode :
2693783
Title :
Improved versions of learning vector quantization
Author :
Kohonen, Teuvo
fYear :
1990
fDate :
17-21 June 1990
Firstpage :
545
Abstract :
The author introduces a variant of (supervised) learning vector quantization (LVQ) and discusses practical problems associated with applying the algorithms. The LVQ algorithms work explicitly in the input domain of the primary observation vectors, and their purpose is to approximate the theoretical Bayes decision borders using piecewise linear decision surfaces. This is done by near-optimal placement of the class codebook vectors in signal space. Because the classification decision is based on nearest-neighbor selection among the codebook vectors, its computation is very fast. It has turned out that the differences between the presented algorithms with regard to the remaining discretization error are not significant, so the choice of algorithm may be based on secondary considerations, such as stability in learning, in which respect the introduced variant (LVQ2.1) seems superior to the others. A comparative study of several methods applied to speech recognition is included.
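As a rough illustration of the LVQ2.1 update described in the abstract, the sketch below (in Python/NumPy; the function name, parameter values, and window constant `w` are illustrative assumptions, not taken from the paper) adjusts the two nearest codebook vectors only when the input falls inside a symmetric window around their decision midplane:

```python
import numpy as np

def lvq21_step(x, label, codebooks, codebook_labels, alpha=0.05, w=0.3):
    """One hypothetical LVQ2.1 step: find the two nearest codebook
    vectors; if exactly one has the correct class and x lies inside
    the window around their midplane, attract the correct vector
    toward x and repel the wrong one."""
    d = np.linalg.norm(codebooks - x, axis=1)
    i, j = np.argsort(d)[:2]                  # two nearest codebook vectors
    li, lj = codebook_labels[i], codebook_labels[j]
    # Update only if exactly one of the two belongs to the correct class.
    if (li == label) == (lj == label):
        return codebooks
    # Window test: the ratio of distances must exceed s = (1-w)/(1+w),
    # i.e. x lies close to the midplane between the two vectors.
    s = (1 - w) / (1 + w)
    if min(d[i] / d[j], d[j] / d[i]) <= s:
        return codebooks
    c, e = (i, j) if li == label else (j, i)  # correct / erroneous indices
    codebooks = codebooks.copy()
    codebooks[c] += alpha * (x - codebooks[c])  # move correct vector toward x
    codebooks[e] -= alpha * (x - codebooks[e])  # move wrong vector away from x
    return codebooks
```

Classification afterward remains a plain nearest-neighbor lookup over the codebook vectors, which is why the decision is cheap to compute.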
Keywords :
data compression; encoding; learning systems; neural nets; pattern recognition; speech recognition; LVQ2.1; class codebook vectors; classification decision; codebook vector initialization; discretization error; learning stability; learning vector quantization; nearest-neighbor selection; piecewise linear decision surfaces; supervised learning; theoretical Bayes decision borders
fLanguage :
English
Publisher :
ieee
Conference_Titel :
1990 IJCNN International Joint Conference on Neural Networks
Conference_Location :
San Diego, CA, USA
Type :
conf
DOI :
10.1109/IJCNN.1990.137622
Filename :
5726582