Title :
Online incremental EM training of GMM and its application to speech processing applications
Author :
Zhang, Yongxin ; Chen, Lixian ; Ran, Xin
Author_Institution :
Qualcomm Incorporated R&D, San Diego, CA, USA
Abstract :
Traditional Expectation-Maximization (EM) training of a Gaussian Mixture Model (GMM) is essentially a batch procedure that requires a sufficiently large set of data samples to update the model parameters. This severely limits the deployment and adaptation of GMMs in many real-time online systems, since incorporating newly observed samples as they become available would require retraining the whole model. This paper presents a new online incremental EM training procedure for GMMs, which performs EM training incrementally and can therefore adapt the GMM online, sample by sample. The proposed method is developed for two EM algorithms for GMMs: Split-and-Merge EM and traditional EM. Experiments on both synthetic data and a speech processing task demonstrate the advantages and efficiency of the new method.
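To make the idea concrete, the following is a minimal sketch of generic online (incremental) EM for a diagonal-covariance GMM: each incoming sample triggers an E-step (component responsibilities) and a stochastic-approximation M-step that blends the sample's sufficient statistics into running estimates. The class name `OnlineGMM`, the fixed step size `step`, and the update form are illustrative assumptions, not the paper's exact procedure (which also covers a Split-and-Merge variant).

```python
import numpy as np

class OnlineGMM:
    """Sketch of online incremental EM for a diagonal-covariance GMM.

    Not the paper's exact algorithm: a generic stochastic-approximation
    online-EM update, shown only to illustrate sample-by-sample adaptation.
    """

    def __init__(self, means, variances, weights, step=0.05):
        self.mu = np.asarray(means, dtype=float)       # (K, D) component means
        self.var = np.asarray(variances, dtype=float)  # (K, D) diagonal variances
        self.w = np.asarray(weights, dtype=float)      # (K,) mixture weights
        self.step = step                               # learning-rate (assumed constant)

    def _responsibilities(self, x):
        # E-step: posterior p(component k | x) under the current parameters,
        # computed in the log domain for numerical stability.
        diff = x - self.mu                                        # (K, D)
        log_det = np.sum(np.log(self.var), axis=1)                # (K,)
        d = self.mu.shape[1]
        log_p = (np.log(self.w)
                 - 0.5 * (log_det
                          + np.sum(diff ** 2 / self.var, axis=1)
                          + d * np.log(2.0 * np.pi)))
        log_p -= log_p.max()
        p = np.exp(log_p)
        return p / p.sum()

    def update(self, x):
        # Incremental M-step: move weights, means, and variances a small
        # step toward the statistics contributed by this single sample.
        x = np.asarray(x, dtype=float)
        r = self._responsibilities(x)                  # (K,)
        eta = self.step
        self.w = (1.0 - eta) * self.w + eta * r        # weights stay normalized
        for k in range(len(self.w)):
            gain = eta * r[k] / max(self.w[k], 1e-12)  # per-component step size
            self.mu[k] += gain * (x - self.mu[k])
            self.var[k] += gain * ((x - self.mu[k]) ** 2 - self.var[k])
            self.var[k] = np.maximum(self.var[k], 1e-6)  # variance floor
```

Because each `update` call touches only running statistics, the model adapts as a data stream arrives, which is the batch-EM limitation the abstract addresses.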
Keywords :
Gaussian processes; speech processing; Gaussian mixture model; batch mode procedure; online incremental expectation-maximization training; real-time online system; split-and-merge expectation-maximization; Adaptation model; Algorithm design and analysis; Data models; Real time systems; Speech; Speech processing; Training; EM; GMM adaptation; unsupervised adaptation;
Conference_Titel :
Signal Processing (ICSP), 2010 IEEE 10th International Conference on
Conference_Location :
Beijing
Print_ISBN :
978-1-4244-5897-4
DOI :
10.1109/ICOSP.2010.5657133