Title :
Factored sparse inverse covariance matrices
Author_Institution :
Dept. of Electr. Eng., Univ. of Washington, Seattle, WA, USA
Abstract :
Most HMM-based speech recognition systems use Gaussian mixtures as observation probability density functions. An important goal in all such systems is to improve parsimony. One method is to adjust the type of covariance matrices used. In this work, factored sparse inverse covariance matrices are introduced. Based on a U'DU factorization, the inverse covariance matrix can be represented using linear regressive coefficients which 1) correspond to sparse patterns in the inverse covariance matrix (and therefore represent conditional independence properties of the Gaussian), and 2) result in a method of partial tying of the covariance matrices without requiring non-linear EM update equations. Results show that the performance of full-covariance Gaussians can be matched by factored sparse inverse covariance Gaussians having significantly fewer parameters.
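To make the factorization in the abstract concrete, the following is a minimal numpy sketch, not the paper's method: it factors an inverse covariance K = Sigma^{-1} as U'DU with U unit upper triangular and D positive diagonal, treats the off-diagonal entries of U as per-dimension regression coefficients, and zeroes small ones to produce a factored sparse inverse covariance that stays positive definite. The threshold-based sparsification, the function names, and the toy covariance are illustrative assumptions; the paper instead learns sparsity patterns and ties coefficients within the EM framework.

```python
import numpy as np

def udu_factor(sigma):
    """Factor K = inv(sigma) as K = U.T @ diag(d) @ U,
    with U unit upper triangular and d > 0."""
    K = np.linalg.inv(sigma)
    C = np.linalg.cholesky(K)       # K = C @ C.T, C lower triangular
    d = np.diag(C) ** 2             # positive diagonal entries of D
    U = (C / np.diag(C)).T          # unit upper triangular factor
    return U, d

def sparsify(U, threshold):
    """Zero small off-diagonal coefficients of U (illustrative stand-in
    for a learned sparsity pattern); U.T @ diag(d) @ U remains SPD."""
    Us = np.where(np.abs(U) >= threshold, U, 0.0)
    np.fill_diagonal(Us, 1.0)
    return Us

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6))
    sigma = A @ A.T + 6 * np.eye(6)          # a toy full covariance

    U, d = udu_factor(sigma)
    K = np.linalg.inv(sigma)
    assert np.allclose(U.T @ np.diag(d) @ U, K)

    Us = sparsify(U, threshold=0.05)
    K_sparse = Us.T @ np.diag(d) @ Us        # factored sparse inverse covariance
    print("off-diagonal coefficients kept:", np.count_nonzero(np.triu(Us, 1)))
    print("still positive definite:", bool(np.all(np.linalg.eigvalsh(K_sparse) > 0)))
```

Because D stays positive and U stays unit triangular, any pattern of zeros imposed on U (or any tying of its coefficients across Gaussians) still yields a valid positive definite inverse covariance, which is what makes the factored parameterization convenient.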
Keywords :
Gaussian processes; covariance matrices; hidden Markov models; sparse matrices; speech recognition; Gaussian mixtures; HMM-based speech recognition systems; U'DU factorization; conditional independence properties; factored sparse inverse covariance matrices; linear regressive coefficients; observation probability density functions; parsimony; partial tying; performance; Automatic speech recognition; Cepstral analysis; Covariance matrix; Hidden Markov models; Matrix decomposition; Nonlinear equations; Probability density function; Random variables; Robustness; Speech recognition;
Conference_Title :
Proceedings of the 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '00)
Conference_Location :
Istanbul
Print_ISBN :
0-7803-6293-4
DOI :
10.1109/ICASSP.2000.859133