DocumentCode :
1924202
Title :
An improved MMIE training algorithm for speaker-independent, small vocabulary, continuous speech recognition
Author :
Normandin, Yves ; Morgera, Salvatore D.
Author_Institution :
Centre de Recherche Inf. de Montreal, Que., Canada
fYear :
1991
fDate :
14-17 Apr 1991
Firstpage :
537
Abstract :
Recently, Gopalakrishnan et al. (1989) introduced a reestimation formula for discrete HMMs (hidden Markov models) that applies to rational objective functions such as the MMIE (maximum mutual information estimation) criterion. The authors analyze the formula and show how its convergence rate can be substantially improved. They introduce a corrective MMIE training algorithm which, when applied to the TI/NIST connected digit database, reduces the string error rate by close to 50%. Gopalakrishnan's result is extended to the continuous case by proposing a new formula for estimating the mean and variance parameters of diagonal Gaussian densities.
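Note :
For orientation only; the following sketch uses the forms commonly cited for these updates and is not taken from the record itself, so the paper's exact notation and constants may differ. The reestimation formula of Gopalakrishnan et al. for a rational objective function F over the probability parameters of a discrete HMM is usually written as
\[
\hat{\theta}_i \;=\; \frac{\theta_i\left(\dfrac{\partial F}{\partial \theta_i} + C\right)}{\sum_j \theta_j\left(\dfrac{\partial F}{\partial \theta_j} + C\right)},
\]
where C is a constant chosen large enough to guarantee growth of F; very large C slows convergence, which is the issue the improved algorithm targets. The continuous-density extension for diagonal Gaussians is usually cited as
\[
\hat{\mu} \;=\; \frac{\left(x^{\mathrm{num}} - x^{\mathrm{den}}\right) + D\,\mu}{\left(\gamma^{\mathrm{num}} - \gamma^{\mathrm{den}}\right) + D},
\qquad
\hat{\sigma}^{2} \;=\; \frac{\left(s^{\mathrm{num}} - s^{\mathrm{den}}\right) + D\left(\sigma^{2} + \mu^{2}\right)}{\left(\gamma^{\mathrm{num}} - \gamma^{\mathrm{den}}\right) + D} \;-\; \hat{\mu}^{2},
\]
where gamma, x and s are occupancy, first-order and second-order statistics accumulated from the numerator (correct-transcription) and denominator (recognition) passes, and D is a damping constant taken large enough to keep the updated variances positive.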
Keywords :
Markov processes; estimation theory; information theory; speech recognition; MMIE; MMIE training algorithm; TI/NIST connected digit database; continuous speech recognition; convergence rate; diagonal Gaussian densities; hidden Markov models; maximum mutual information estimation; mean; rational objective functions; reestimation formula; small vocabulary; speaker independent recognition; string error rate; variance; Communication systems; Convergence; Error analysis; Hidden Markov models; Maximum likelihood decoding; Maximum likelihood estimation; Mutual information; NIST; Speech recognition; Vocabulary;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
1991 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-91)
Conference_Location :
Toronto, Ont.
ISSN :
1520-6149
Print_ISBN :
0-7803-0003-3
Type :
conf
DOI :
10.1109/ICASSP.1991.150395
Filename :
150395