Title :
Buried Markov models for speech recognition
Author_Institution :
International Computer Science Institute, Berkeley, CA, USA
Abstract :
Good HMM-based speech recognition performance requires that the HMM conditional independence assumptions introduce at most minimal inaccuracies. In this work, those assumptions are relaxed in a principled way. For each hidden state value, additional dependencies are added between observation elements to increase both accuracy and discriminability. These additional dependencies are chosen according to natural statistical dependencies extant in the training data that are not well modeled by an HMM. The result is called a buried Markov model (BMM) because the underlying Markov chain in an HMM is further hidden (buried) by specific cross-observation dependencies. Gaussian mixture HMMs are extended to represent BMM dependencies, and new EM update equations are derived. In preliminary experiments on a large-vocabulary isolated-word speech database, BMMs achieve an 11% improvement in word error rate (WER) with only a 9.5% increase in the number of parameters, using a single state per monophone in the speech recognition system.
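The abstract describes augmenting a per-state Gaussian observation model with sparse dependencies on elements of the previous observation vector. The sketch below is not the authors' implementation; it is a minimal illustration, assuming a conditionally Gaussian form p(x_t | x_{t-1}, q_t) in which the state-dependent mean is shifted by a sparse linear regression on selected elements of the previous frame. All identifiers (BMMStateGaussian, mask, B) are hypothetical.

import numpy as np


class BMMStateGaussian:
    """Illustrative conditional Gaussian p(x_t | x_{t-1}, q_t = this state).

    The mean is mu + B @ (mask * x_{t-1}); the boolean mask keeps the
    cross-observation dependency sparse, standing in for the data-driven
    dependency selection described in the abstract.
    """

    def __init__(self, mu, cov, B, mask):
        self.mu = np.asarray(mu, dtype=float)      # state-dependent mean
        self.cov = np.asarray(cov, dtype=float)    # state-dependent covariance
        self.B = np.asarray(B, dtype=float)        # regression weights on x_{t-1}
        self.mask = np.asarray(mask, dtype=bool)   # which past elements are used

    def conditional_mean(self, x_prev):
        # Only the selected elements of the previous frame shift the mean.
        return self.mu + self.B @ (x_prev * self.mask)

    def log_likelihood(self, x_t, x_prev):
        d = self.mu.size
        diff = x_t - self.conditional_mean(x_prev)
        _, logdet = np.linalg.slogdet(self.cov)
        quad = diff @ np.linalg.solve(self.cov, diff)
        return -0.5 * (d * np.log(2.0 * np.pi) + logdet + quad)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 4
    state = BMMStateGaussian(
        mu=np.zeros(d),
        cov=np.eye(d),
        B=0.3 * np.eye(d),
        mask=[True, False, True, False],  # sparse cross-observation dependency
    )
    x_prev, x_t = rng.normal(size=d), rng.normal(size=d)
    print("log p(x_t | x_{t-1}, q):", state.log_likelihood(x_t, x_prev))

In a full system such a density would replace each Gaussian mixture component in the HMM, with the mask and regression weights chosen per state and the parameters re-estimated by the extended EM updates the paper derives.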
Keywords :
Gaussian processes; correlation methods; hidden Markov models; probability; speech recognition; EM update equations; Gaussian mixture HMM; HMM conditional independence; Markov chain; accuracy; buried Markov models; cross-observation dependencies; discriminability; experiments; graphical models; large-vocabulary isolated-word speech database; observation elements; phonebook results; speech recognition; speech recognition performance; statistical dependencies; training data; word error rate; Automatic speech recognition; Computer science; Databases; Equations; Error analysis; Hidden Markov models; Power system modeling; Probability distribution; Speech recognition; Training data;
Conference_Title :
1999 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 1999), Proceedings
Print_ISBN :
0-7803-5041-3
DOI :
10.1109/ICASSP.1999.759766