DocumentCode :
352335
Title :
A generalization of the maximum a posteriori training algorithm for mixture priors
Author :
Buhrke, Eric R. ; Liu, Chen
Author_Institution :
Human Interface Lab., Motorola Inc., Naperville, IL, USA
Volume :
2
fYear :
2000
fDate :
2000
Abstract :
Prior information about the operating environment of a speech recognizer is often general and abstract. Frequently, information such as the number of speakers with foreign accents or the number of callers using cellular phones is readily available, yet incorporating this information during model training is difficult. This paper generalizes the popular MAP training algorithm derived by Gauvain and Lee (1994) so that more prior information can be utilized during training. The priors take the form of mixture distributions, with each mixture component representing a unique property of the data and the mixing weights defined by the a priori constraints. Using the training algorithms derived here, it is shown that significant performance improvements can be obtained.
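The abstract's idea can be illustrated with a small sketch. In standard MAP adaptation (Gauvain and Lee, 1994), a Gaussian mean is re-estimated as an interpolation between a single prior mean and the data's weighted sample mean. The sketch below extends this to a mixture prior by combining per-component MAP estimates with a priori mixing weights; the function names and the simple combination rule are illustrative assumptions, not the paper's exact derivation.

```python
import numpy as np

def map_mean_update(x, gamma, mu0, tau):
    """Standard MAP re-estimation of a Gaussian mean:
    interpolates the prior mean mu0 (prior strength tau) with the
    occupancy-weighted sample mean of the adaptation data x."""
    n = gamma.sum()
    weighted_sum = (gamma[:, None] * x).sum(axis=0)
    return (tau * mu0 + weighted_sum) / (tau + n)

def mixture_map_mean_update(x, gamma, prior_means, prior_weights, tau):
    """Illustrative mixture-prior variant (an assumption, not the paper's
    algorithm): each prior component k, e.g. one per speaker population,
    contributes its own MAP estimate; the estimates are combined using
    the a priori mixing weights."""
    estimates = np.stack(
        [map_mean_update(x, gamma, np.asarray(m), tau) for m in prior_means]
    )
    w = np.asarray(prior_weights, dtype=float)
    return (w[:, None] * estimates).sum(axis=0) / w.sum()

# Usage: two observations with unit occupancy, two prior components
x = np.array([[0.0], [2.0]])
gamma = np.array([1.0, 1.0])
mu = mixture_map_mean_update(x, gamma, [[0.0], [2.0]], [0.5, 0.5], tau=2.0)
```

With equal mixing weights the combined estimate falls between the two per-component MAP estimates, which matches the intuition that the a priori constraints (e.g. the known fraction of accented speakers) control how much each prior component influences the trained model.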
Keywords :
maximum likelihood estimation; speech recognition; MAP training algorithm; a priori constraints; callers; foreign accents; maximum a posteriori training algorithm; mixture distributions; mixture priors; model training; operating environment; performance; prior information; speech recognizer; Cellular phones; Density functional theory; Encapsulation; Hidden Markov models; Humans; Probability density function; Robustness; Speech recognition; Testing; Training data;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Proceedings of the 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '00)
Conference_Location :
Istanbul
ISSN :
1520-6149
Print_ISBN :
0-7803-6293-4
Type :
conf
DOI :
10.1109/ICASSP.2000.859129
Filename :
859129