DocumentCode :
3162811
Title :
Sequential Deep Belief Networks
Author :
Andrew, Galen ; Bilmes, Jeff
Author_Institution :
Dept. of Comput. Sci., Univ. of Washington, Seattle, WA, USA
fYear :
2012
fDate :
25-30 March 2012
Firstpage :
4265
Lastpage :
4268
Abstract :
Previous work applying Deep Belief Networks (DBNs) to problems in speech processing has combined the output of a DBN trained over a sliding window of input with a hidden Markov model (HMM) or conditional random field (CRF) to model linear-chain dependencies in the output. We describe a new model, the Sequential DBN (SDBN), that uses inherently sequential models in all hidden layers as well as in the output layer, so the latent variables can potentially model long-range phenomena. The model introduces minimal computational overhead compared to other DBN approaches to sequence labeling, and it achieves comparable performance with a much smaller model (in terms of the number of parameters). Experiments on TIMIT phone recognition show that including sequential information at all layers improves accuracy over baseline models that do not use sequential information in the hidden layers.
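To make the architectural idea concrete, the following is a minimal sketch contrasting a standard frame-wise deep layer with a layer whose hidden state also depends on its own state at the previous frame. All names (sigmoid, framewise_layer, sequential_layer, W, U, b) and the simple logistic recurrence are illustrative assumptions, not the paper's actual SDBN formulation or training procedure.

# Illustrative sketch only: a frame-wise deep layer vs. a layer with a
# linear-chain (sequential) dependency on its own previous hidden state.
# The recurrence and parameter names are assumptions for illustration,
# not the exact SDBN model from the paper.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def framewise_layer(X, W, b):
    """Standard DBN-style layer: each frame t is transformed independently.
    X: (T, d_in) sequence of input frames; returns (T, d_hid)."""
    return sigmoid(X @ W + b)

def sequential_layer(X, W, U, b):
    """Layer with a chain dependency: the hidden state at frame t also depends
    on the same layer's hidden state at frame t-1, so latent variables can
    carry information across long spans of the sequence."""
    T = X.shape[0]
    d_hid = W.shape[1]
    H = np.zeros((T, d_hid))
    h_prev = np.zeros(d_hid)
    for t in range(T):
        h_prev = sigmoid(X[t] @ W + h_prev @ U + b)
        H[t] = h_prev
    return H

# Tiny usage example on random "acoustic" frames.
rng = np.random.default_rng(0)
T, d_in, d_hid = 8, 13, 16
X = rng.normal(size=(T, d_in))
W = rng.normal(scale=0.1, size=(d_in, d_hid))
U = rng.normal(scale=0.1, size=(d_hid, d_hid))
b = np.zeros(d_hid)
H_indep = framewise_layer(X, W, b)    # no sequential information
H_seq = sequential_layer(X, W, U, b)  # per-layer sequential information

Stacking several such sequential layers gives every level of the hierarchy access to temporal context, which is the property the abstract attributes to the SDBN.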
Keywords :
belief networks; hidden Markov models; speech processing; speech recognition; CRF; HMM; SDBN; TIMIT phone recognition; sequential DBN; sequential deep belief networks; sequential information; sequential models; Acoustics; Computational modeling; Hidden Markov models; Speech; Speech processing; Training; Vectors; TIMIT; deep belief network; deep learning; phone recognition
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location :
Kyoto, Japan
ISSN :
1520-6149
Print_ISBN :
978-1-4673-0045-2
Type :
conf
DOI :
10.1109/ICASSP.2012.6288861
Filename :
6288861