Title :
Equivalence of Generative and Log-Linear Models
Author :
Heigold, Georg ; Ney, Hermann ; Lehnen, Patrick ; Gass, Tobias ; Schlüter, Ralf
Author_Institution :
Comput. Sci. Dept., RWTH Aachen Univ., Aachen, Germany
fDate :
7/1/2011
Abstract :
Conventional speech recognition systems are based on hidden Markov models (HMMs) with Gaussian mixture emission models (GHMMs). Discriminative log-linear models are an alternative modeling approach and have recently been investigated in speech recognition. GHMMs are directed models with constraints, e.g., positivity of variances and normalization of conditional probabilities, whereas log-linear models do not use such constraints. This paper compares the posterior form of typical generative models used in speech recognition with their log-linear counterparts. The key result is the derivation of the equivalence of these two different approaches under weak assumptions. In particular, we study Gaussian mixture models, part-of-speech bigram tagging models, and finally GHMMs. This result unifies two important but fundamentally different modeling paradigms in speech recognition at the functional level. Furthermore, the paper presents comparative experimental results for speech tasks of different complexity, including digit string and large-vocabulary continuous speech recognition tasks.
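As an illustrative sketch of the simplest case covered by this line of argument (a single Gaussian per class and a single observation; the notation below is chosen here for exposition and is not taken from the paper, which treats mixtures, tagging models, and full GHMMs), the posterior of a generative Gaussian classifier can be rewritten exactly as a log-linear (softmax) model over first- and second-order features of the observation x:

p(c \mid x) = \frac{p(c)\,\mathcal{N}(x;\mu_c,\Sigma_c)}{\sum_{c'} p(c')\,\mathcal{N}(x;\mu_{c'},\Sigma_{c'})}
            = \frac{\exp\!\big(x^\top \Lambda_c x + \lambda_c^\top x + \alpha_c\big)}{\sum_{c'} \exp\!\big(x^\top \Lambda_{c'} x + \lambda_{c'}^\top x + \alpha_{c'}\big)},

with
\Lambda_c = -\tfrac{1}{2}\Sigma_c^{-1}, \qquad
\lambda_c = \Sigma_c^{-1}\mu_c, \qquad
\alpha_c = \log p(c) - \tfrac{1}{2}\mu_c^\top \Sigma_c^{-1} \mu_c - \tfrac{1}{2}\log\det(2\pi\Sigma_c).

The non-trivial direction established in the paper is the converse: under the weak assumptions stated there, an unconstrained log-linear model of this posterior form can be mapped back to a constrained generative model with the same posterior.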
Keywords :
Gaussian processes; hidden Markov models; speech recognition; Gaussian mixture models; HMM; conditional probabilities; digit string; discriminative log-linear models; large-vocabulary continuous speech recognition tasks; part-of-speech bigram tagging models; typical generative models; Covariance matrix; Equations; Markov processes; Mathematical model; Training; Conditional random field (CRF); Gaussian mixture model (GMM); hidden Markov model (HMM); log-linear model
Journal_Title :
IEEE Transactions on Audio, Speech, and Language Processing
DOI :
10.1109/TASL.2010.2082532