Title :
A generalization of the maximum a posteriori training algorithm for mixture priors
Author :
Buhrke, Eric R. ; Liu, Chen
Author_Institution :
Human Interface Lab., Motorola Inc., Naperville, IL, USA
Abstract :
Prior information about the operating environment of a speech recognizer is often general and abstract. Frequently, information such as the number of speakers with foreign accents or the number of callers using cellular phones is readily available, yet incorporating this information during model training is difficult. This paper generalizes the popular MAP training algorithm derived by Gauvain and Lee (1994) so that more prior information can be utilized during training. The priors take the form of mixture distributions, with each mixture component representing a unique property of the data and the mixing weights defined by the a priori constraints. Using the training algorithms derived here, it is shown that significant performance improvements can be obtained.
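To make the idea of MAP estimation under a mixture prior concrete, here is a minimal sketch (not the paper's actual algorithm, which operates on HMM parameters) for the simplest case: estimating a Gaussian mean whose prior is a mixture of Gaussians, where each component might model a distinct population (e.g. accented vs. native speakers) and the mixing weights encode the known a priori proportions. Since each component is individually conjugate, its posterior mode has a closed form; the sketch evaluates the full mixture log posterior at each candidate mode and keeps the best one. All variable names and the heuristic itself are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def log_normal_pdf(x, mean, var):
    """Log density of a univariate Gaussian."""
    return -0.5 * (np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def map_mean_mixture_prior(data, sigma2, weights, prior_means, prior_vars):
    """Approximate MAP estimate of a Gaussian mean (known variance sigma2)
    under a mixture-of-Gaussians prior.

    Heuristic: each mixture component, taken alone, is conjugate and
    yields a closed-form posterior mode; we score those candidate modes
    under the full mixture posterior and return the best one.
    """
    n = len(data)
    xbar = np.mean(data)

    def log_posterior(mu):
        # Gaussian log-likelihood of the data plus log of the mixture prior.
        loglik = np.sum(log_normal_pdf(data, mu, sigma2))
        logprior = np.logaddexp.reduce(
            [np.log(w) + log_normal_pdf(mu, m, v)
             for w, m, v in zip(weights, prior_means, prior_vars)])
        return loglik + logprior

    # Per-component conjugate posterior modes: precision-weighted average
    # of the component's prior mean and the sample mean.
    candidates = [(m / v + n * xbar / sigma2) / (1.0 / v + n / sigma2)
                  for m, v in zip(prior_means, prior_vars)]
    return max(candidates, key=log_posterior)
```

With data drawn near one component's prior mean, the estimate is pulled toward that component, illustrating how the mixing weights and component parameters let heterogeneous prior knowledge influence the MAP solution.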
Keywords :
maximum likelihood estimation; speech recognition; MAP training algorithm; a priori constraints; callers; foreign accents; maximum a posteriori training algorithm; mixture distributions; mixture priors; model training; operating environment; performance; prior information; speech recognizer; Cellular phones; Density functional theory; Encapsulation; Hidden Markov models; Humans; Probability density function; Robustness; Speech recognition; Testing; Training data;
Conference_Title :
2000 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '00), Proceedings
Conference_Location :
Istanbul
Print_ISBN :
0-7803-6293-4
DOI :
10.1109/ICASSP.2000.859129