Title :
Incremental ML estimation of HMM parameters for efficient training
Author :
Gotoh, Yoshihiko ; Silverman, Harvey F.
Author_Institution :
Div. of Eng., Brown Univ., Providence, RI, USA
Abstract :
Conventional training of a hidden Markov model (HMM) is performed by an expectation-maximization (EM) algorithm using a maximum likelihood (ML) criterion. It has been reported that substantial speed improvements can be obtained using an incremental variant of maximum a posteriori (MAP) estimation. That approach, however, requires a prior distribution at the start of training, and an appropriate prior is difficult to find in some cases. This paper presents a new approach to efficient training of HMM parameters under the standard ML criterion; no prior distribution is required. The algorithm sequentially selects a subset of data from the training set, updates the parameters from that subset, and iterates until convergence. A solid theoretical foundation ensures a monotone improvement of the likelihood, so stable convergence is guaranteed. Experimental results indicate substantially faster convergence than the standard batch training algorithm while maintaining the same level of recognition performance.
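The subset-based scheme described in the abstract can be illustrated with incremental EM on a toy model. The sketch below is an assumption-laden stand-in, not the authors' HMM procedure: it fits a 1-D Gaussian mixture instead of an HMM, and the function names, batch schedule, and initialization are illustrative. The key property it mirrors is that each step refreshes the E-step statistics for one subset only, while the M-step always uses the globally cached statistics, which is what preserves the monotone likelihood improvement.

```python
import numpy as np

def e_step(x, w, mu, var):
    """Responsibilities p(component k | x_i) for a 1-D Gaussian mixture."""
    log_p = (np.log(w)
             - 0.5 * np.log(2.0 * np.pi * var)
             - 0.5 * (x[:, None] - mu) ** 2 / var)
    log_p -= log_p.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(log_p)
    return p / p.sum(axis=1, keepdims=True)

def incremental_em(x, n_components=2, n_passes=30, batch_size=20):
    """Incremental EM sketch (toy GMM, not the paper's HMM update):
    refresh responsibilities for one subset at a time, then re-estimate
    parameters from the *global* accumulated statistics."""
    n = len(x)
    # Illustrative initialization: spread means over data quantiles.
    mu = np.quantile(x, (np.arange(n_components) + 0.5) / n_components)
    var = np.full(n_components, x.var())
    w = np.full(n_components, 1.0 / n_components)
    resp = e_step(x, w, mu, var)                   # one full initial E-step
    for _ in range(n_passes):
        for start in range(0, n, batch_size):
            sl = slice(start, start + batch_size)
            resp[sl] = e_step(x[sl], w, mu, var)   # E-step on the subset only
            nk = resp.sum(axis=0) + 1e-12          # M-step from global stats
            w = nk / n
            mu = resp.T @ x / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var
```

Because the parameters are updated after every subset rather than once per full sweep, each data point influences the model many times per pass, which is the source of the faster convergence the abstract reports for the HMM case.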
Keywords :
hidden Markov models; maximum likelihood estimation; numerical stability; speech recognition; HMM parameters; expectation-maximization algorithm; experimental results; hidden Markov model; incremental ML estimation; maximum a posteriori estimation; maximum likelihood criterion; speech recognition performance; speed improvements; stable convergence; training; Convergence; Expectation-maximization algorithms; Hidden Markov models; Maximum likelihood estimation; Parameter estimation; Speech; Statistics;
Conference_Title :
Proceedings of the 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-96)
Print_ISBN :
0-7803-3192-3
DOI :
10.1109/ICASSP.1996.543188