DocumentCode :
290047
Title :
New ways to use LVQ-codebooks together with hidden Markov models
Author :
Torkkola, Kari
Author_Institution :
Inst. Dalle Molle d'Intelligence Artificielle Perceptive, Martigny, Switzerland
Volume :
i
fYear :
1994
fDate :
19-22 Apr 1994
Abstract :
We introduce a novel way to employ codebooks trained by learning vector quantization (LVQ) together with hidden Markov models (HMMs). In previous work, LVQ codebooks have been used as frame labelers, and the resulting label stream has been modeled and decoded by discrete-observation HMMs. We present a way to extract more information from the LVQ stage by modeling the class-wise quantization errors of LVQ with continuous-density HMMs. Experiments on a speaker-dependent phoneme-spotting task verify that significant improvements are attainable over plain continuous-density HMMs and over the hybrid of LVQ and discrete HMMs.
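The abstract describes the approach only at a high level, so the sketch below is one plausible reading of it rather than the paper's actual implementation. It assumes a pre-trained LVQ codebook whose codewords carry class labels and uses Euclidean distances; the names (classwise_quantization_errors, codeword_classes, and so on) are illustrative. The per-frame vectors it produces are the kind of class-wise quantization errors that, per the abstract, would then be modeled by continuous-density HMMs.

```python
import numpy as np

def classwise_quantization_errors(frames, codebook, codeword_classes, num_classes):
    """For each feature frame, compute the distance to the nearest LVQ
    codeword of every class.

    frames           : (T, D) array of acoustic feature vectors
    codebook         : (K, D) array of LVQ codewords (assumed pre-trained)
    codeword_classes : (K,)  array of class indices in [0, num_classes)
    Returns a (T, num_classes) array of class-wise quantization errors.
    """
    # Pairwise Euclidean distances between frames and codewords: shape (T, K)
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=-1)

    errors = np.full((frames.shape[0], num_classes), np.inf)
    for c in range(num_classes):
        mask = codeword_classes == c
        if np.any(mask):
            # Minimum distance to any codeword belonging to class c
            errors[:, c] = dists[:, mask].min(axis=1)
    return errors

# Toy usage: 5 random frames, a 12-codeword codebook spread over 3 classes
rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 8))
codebook = rng.normal(size=(12, 8))
codeword_classes = rng.integers(0, 3, size=12)
print(classwise_quantization_errors(frames, codebook, codeword_classes, 3).shape)
# -> (5, 3): one class-wise error vector per frame
```

The point of this reading is that each frame is summarized by how well the best-matching codeword of every class explains it, rather than by a single hard label, which is consistent with the abstract's claim of extracting more information from the LVQ stage than a discrete label stream provides.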
Keywords :
hidden Markov models; speech coding; speech recognition; vector quantisation; LVQ-codebooks; class-wise quantization errors; continuous density HMM; discrete observation HMM; frame labelers; label stream; speaker dependent phoneme spotting; Artificial neural networks; Automatic speech recognition; Concatenated codes; Data mining; Hidden Markov models; Maximum likelihood decoding; Neural networks; Vector quantization;
fLanguage :
English
Publisher :
ieee
Conference_Title :
1994 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-94)
Conference_Location :
Adelaide, SA
ISSN :
1520-6149
Print_ISBN :
0-7803-1775-0
Type :
conf
DOI :
10.1109/ICASSP.1994.389271
Filename :
389271