DocumentCode :
672373
Title :
Combining stochastic average gradient and Hessian-free optimization for sequence training of deep neural networks
Author :
Dognin, Pierre ; Goel, Vaibhava
Author_Institution :
IBM T.J. Watson Res. Center, Yorktown Heights, NY, USA
fYear :
2013
fDate :
8-12 Dec. 2013
Firstpage :
321
Lastpage :
325
Abstract :
Minimum phone error (MPE) training of deep neural networks (DNNs) is an effective technique for reducing the word error rate of automatic speech recognition systems. This training is often carried out using a Hessian-free (HF) quasi-Newton approach, although other methods such as stochastic gradient descent have also been applied successfully. In this paper we present a novel stochastic approach to HF sequence training inspired by the recently proposed stochastic average gradient (SAG) method. SAG reuses gradient information from past updates and consequently simulates the presence of more training data than is actually observed in each model update. We extend SAG by dynamically weighting the contribution of previous gradients and by combining it with stochastic HF optimization. We term the resulting procedure DSAG-HF. Experimental results for training DNNs on 1500 hours of audio data show that, compared to baseline HF training, DSAG-HF yields a better held-out MPE loss after each model parameter update and converges to a better overall loss value. Furthermore, since each DSAG-HF update takes place over a smaller amount of data, the procedure converges in about half the time of baseline HF sequence training.
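The core SAG idea described above (store the most recent gradient for each mini-batch and step along a weighted average of all stored gradients) can be illustrated with a minimal toy sketch. This is an assumption-laden illustration, not the paper's DSAG-HF procedure: the weighting here is uniform, the learning rate and problem are arbitrary, and the HF (second-order) component is omitted.

```python
import numpy as np

# Hypothetical sketch of a SAG-style update with weighted past gradients.
# The function name, learning rate, and uniform weights are illustrative
# assumptions; the paper's dynamic weighting scheme is not reproduced here.

def sag_weighted_step(theta, grad_table, weights, i, grad_i, lr):
    """Replace the stored gradient for mini-batch i with a fresh one,
    then step along the weighted average of all stored gradients."""
    grad_table[i] = grad_i
    avg = sum(w * g for w, g in zip(weights, grad_table)) / sum(weights)
    return theta - lr * avg

# Toy problem: minimize f(theta) = mean_i (theta - t_i)^2, whose
# minimizer is the mean of the targets (2.5 here).
targets = np.array([1.0, 2.0, 3.0, 4.0])
n = len(targets)
theta = 0.0
grad_table = [0.0] * n   # one stored gradient per mini-batch (stale allowed)
weights = [1.0] * n      # dynamic weights; uniform in this sketch

rng = np.random.default_rng(0)
for _ in range(500):
    i = int(rng.integers(n))
    grad_i = 2.0 * (theta - targets[i])  # gradient of batch i at current theta
    theta = sag_weighted_step(theta, grad_table, weights, i, grad_i, lr=0.1)
```

Because stale gradients from unvisited batches still contribute to every step, each update behaves as if it had seen more data than the single sampled mini-batch, which is the effect the abstract attributes to SAG.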
Keywords :
Newton method; gradient methods; neural nets; optimisation; speech recognition; stochastic processes; DNN; HF; Hessian-free optimization; MPE; SAG method; automatic speech recognition; combining stochastic average gradient; deep neural networks; minimum phone error; quasi-Newton approach; sequence training; stochastic average gradient; word error rate; Convergence; Data models; Optimization; Stochastic processes; Training; Training data; Deep Neural Network; Hessian-free Optimization; Sequence Training; Stochastic Average Gradient; Stochastic Training;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on
Conference_Location :
Olomouc
Type :
conf
DOI :
10.1109/ASRU.2013.6707750
Filename :
6707750