DocumentCode :
3752211
Title :
Integrating prosodic information into recurrent neural network language model for speech recognition
Author :
Tong Fu;Yang Han;Xiangang Li;Yi Liu;Xihong Wu
Author_Institution :
Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, 100871
Year :
2015
Firstpage :
1194
Lastpage :
1197
Abstract :
Prosody provides cues that are critical to human speech perception and comprehension, so it is plausible to integrate prosodic information into machine speech recognition. However, because of its supra-segmental nature, prosodic information is hard to combine with conventional acoustic features. Recently, recurrent neural network language models (RNNLMs) have been shown to be state-of-the-art language models on many tasks. We therefore attempt to integrate prosodic information into RNNLMs to improve speech recognition performance through a rescoring strategy. First, three word-level prosodic features are extracted from speech and each is fed into a separate RNNLM, so that the RNNLM predicts the next word from both the prosodic features and the word history. Experiments on the LibriSpeech corpus show that the word error rate decreases from 8.07% to 7.96%. Second, prosodic information is combined at the feature level and the model level for further improvement, and the word error rate decreases by 4.71% relative.
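The integration described in the abstract can be illustrated with a minimal sketch: a recurrent language model whose input at each step is the current word embedding concatenated with that word's prosodic feature vector, and whose output distribution over the next word can be accumulated to rescore N-best hypotheses. This is not the authors' implementation; the feature layout (e.g. duration, pitch, energy per word), the dimensions, and the simple tanh recurrence are illustrative assumptions.

```python
# Sketch of an RNN language model conditioned on word history plus
# word-level prosodic features, used to score a hypothesis for rescoring.
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 1000   # assumed vocabulary size
EMBED_DIM = 64      # assumed word-embedding size
PROSODY_DIM = 3     # e.g. duration, pitch, energy per word (assumed)
HIDDEN_DIM = 128    # assumed recurrent state size

# Randomly initialised parameters stand in for trained weights.
E = rng.normal(scale=0.1, size=(VOCAB_SIZE, EMBED_DIM))                    # word embeddings
W_in = rng.normal(scale=0.1, size=(HIDDEN_DIM, EMBED_DIM + PROSODY_DIM))   # input weights
W_rec = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))               # recurrent weights
W_out = rng.normal(scale=0.1, size=(VOCAB_SIZE, HIDDEN_DIM))               # output weights


def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()


def step(word_id, prosody, h):
    """One RNN step: fuse the word embedding with its prosodic features,
    update the word-history state, and return the next-word distribution."""
    x = np.concatenate([E[word_id], prosody])   # feature-level fusion
    h = np.tanh(W_in @ x + W_rec @ h)           # recurrent update over word history
    return softmax(W_out @ h), h


def rescore(word_ids, prosody_feats):
    """Log-probability of a word sequence, usable for N-best rescoring."""
    h = np.zeros(HIDDEN_DIM)
    logp = 0.0
    for t in range(len(word_ids) - 1):
        probs, h = step(word_ids[t], prosody_feats[t], h)
        logp += np.log(probs[word_ids[t + 1]])
    return logp


# Toy usage: score one hypothesis given per-word prosodic feature vectors.
hyp = [3, 17, 42, 7]
pros = rng.normal(size=(len(hyp), PROSODY_DIM))
print(rescore(hyp, pros))
```

In such a setup the language-model score computed from both the word history and the prosodic features would replace or interpolate with the baseline LM score when reranking the recognizer's N-best lists.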
Keywords :
"Speech","Hidden Markov models","Speech recognition","Context","Acoustics","Training","Feature extraction"
Publisher :
IEEE
Conference_Title :
2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
Type :
conf
DOI :
10.1109/APSIPA.2015.7415462
Filename :
7415462