Title :
Regularized Subspace Gaussian Mixture Models for Speech Recognition

Author :
Lu, Liang; Ghoshal, Arnab; Renals, Steve

Author_Institution :
University of Edinburgh, Edinburgh, UK

Date :
7/1/2011

Abstract :
Subspace Gaussian mixture models (SGMMs) provide a compact representation of the Gaussian parameters in an acoustic model, but may still suffer from over-fitting when training data are insufficient. In this letter, the SGMM state parameters are estimated using a penalized maximum-likelihood objective based on ℓ1 and ℓ2 regularization, as well as their combination, known as the elastic net, for robust model estimation. Experiments on the 5000-word Wall Street Journal transcription task show word error rate reductions and improved model robustness with regularization.
         
        
Keywords :
Gaussian processes; maximum likelihood estimation; speech recognition; SGMM state parameters; acoustic model; penalized maximum-likelihood estimation; regularized subspace Gaussian mixture models; robust model estimation; Wall Street Journal transcription task; word error rate reduction; Acoustics; Data models; Hidden Markov models; Maximum likelihood estimation; Robustness; Speech recognition; ℓ1/ℓ2-norm penalty; acoustic modeling; elastic net; regularization; sparsity; subspace Gaussian mixture models

Journal_Title :
IEEE Signal Processing Letters

            DOI : 
10.1109/LSP.2011.2157820