Title :
Adaptation of context-dependent deep neural networks for automatic speech recognition
Author :
Kaisheng Yao ; Dong Yu ; Frank Seide ; Hang Su ; Li Deng ; Yifan Gong
Author_Institution :
Online Service Div., Microsoft Corp., Redmond, WA, USA
Abstract :
In this paper, we evaluate the effectiveness of adaptation methods for context-dependent deep-neural-network hidden Markov models (CD-DNN-HMMs) for automatic speech recognition. We investigate the affine transformation and several of its variants for adapting the top hidden layer. We compare the affine transformations against direct adaptation of the softmax layer weights. The feature-space discriminative linear regression (fDLR) method, which applies an affine transformation to the input layer, is also evaluated. On a large vocabulary speech recognition task, a stochastic gradient ascent implementation of fDLR and of the top hidden layer adaptation is shown to reduce word error rates (WERs) by 17% and 14%, respectively, compared to the baseline DNN performance. With a batch update implementation, the softmax layer adaptation technique reduces WERs by 10%. We observe that applying only a bias shift performs as well as scaling plus bias shift.
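As a rough illustration of the adaptation strategies the abstract describes, the sketch below inserts an identity-initialized affine transform (y = Wx + b) either on the input features (fDLR-style) or on the top hidden layer's activations, and updates its parameters with stochastic gradient steps on adaptation data. This is a minimal sketch under assumed shapes and a toy objective, not the paper's implementation; the feature dimension, learning rate, and the stand-in gradient are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): speaker adaptation of a trained DNN by
# inserting an affine transform y = W x + b, initialized to the identity, on the
# input features (fDLR-style) or on the top hidden layer's activations.
import numpy as np

def make_affine(dim):
    """Identity-initialized affine transform so adaptation starts from the baseline DNN."""
    return np.eye(dim), np.zeros(dim)

def apply_affine(W, b, x):
    return W @ x + b

def sgd_step(W, b, x, grad_y, lr=1e-3):
    """One stochastic-gradient update of the affine parameters,
    given the gradient of the adaptation objective w.r.t. y = W x + b."""
    W -= lr * np.outer(grad_y, x)   # dL/dW = grad_y x^T
    b -= lr * grad_y                # dL/db = grad_y
    return W, b

# Toy adaptation loop on a 40-dimensional feature vector (dimension assumed).
rng = np.random.default_rng(0)
W, b = make_affine(40)
for _ in range(100):
    x = rng.standard_normal(40)     # stand-in for an acoustic feature frame
    y = apply_affine(W, b, x)
    grad_y = y - x                  # stand-in gradient; a real system would
                                    # back-propagate the frame classification loss
    W, b = sgd_step(W, b, x, grad_y)
```

The bias-shift-only variant mentioned in the abstract corresponds to keeping W fixed at the identity and updating only b in the same loop.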
Keywords :
gradient methods; hidden Markov models; neural nets; regression analysis; speech recognition; affine transformation; automatic speech recognition; bias shift; context-dependent deep neural network; fDLR method; feature-space discriminative linear regression; hidden Markov model; softmax layer adaptation technique; stochastic gradient ascent implementation; vocabulary speech recognition task; word error rate; Adaptation models; Artificial neural networks; Hidden Markov models; Speech recognition; Stochastic processes; Vectors; Context-Dependent Deep-Neural-Networks; HMM; speaker adaptation; speech recognition;
Conference_Title :
Spoken Language Technology Workshop (SLT), 2012 IEEE
Conference_Location :
Miami, FL
Print_ISBN :
978-1-4673-5125-6
Electronic_ISBN :
978-1-4673-5124-9
DOI :
10.1109/SLT.2012.6424251