DocumentCode :
179246
Title :
Learning spectral mapping for speech dereverberation
Author :
Kun Han ; Yuxuan Wang ; DeLiang Wang
Author_Institution :
Dept. of Comput. Sci. & Eng., Ohio State Univ., Columbus, OH, USA
fYear :
2014
fDate :
4-9 May 2014
Firstpage :
4628
Lastpage :
4632
Abstract :
Reverberation distorts human speech and usually has negative effects on speech intelligibility, especially for hearing-impaired listeners. It also causes performance degradation in automatic speech recognition and speaker identification systems. Therefore, the dereverberation problem must be dealt with in daily listening environments. We propose to use deep neural networks (DNNs) to learn a spectral mapping from the reverberant speech to the anechoic speech. The trained DNN produces the estimated spectral representation of the corresponding anechoic speech. We demonstrate that distortion caused by reverberation is substantially attenuated by the DNN, whose outputs can be resynthesized to the dereverberated speech signal. The proposed approach is simple, and our systematic evaluation shows promising dereverberation results, which are significantly better than those of related systems.
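The abstract describes learning a mapping from reverberant spectral frames to the corresponding anechoic frames with a neural network. Below is a minimal, hypothetical sketch of that idea in NumPy: synthetic data stands in for log-magnitude spectrogram frames, reverberation is crudely simulated as a smearing of past frames, and a small one-hidden-layer network (far shallower than the DNNs in the paper, and without the paper's training details) is fit with plain gradient descent. All dimensions, the smearing kernel, and the network size are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for log-magnitude spectrogram frames (hypothetical sizes).
n_frames, n_bins, context = 500, 32, 5
anechoic = rng.standard_normal((n_frames, n_bins))

# Crudely simulate reverberation as causal smearing of past frames.
kernel = np.array([1.0, 0.6, 0.3, 0.1])  # assumed decay, not from the paper
reverb = np.stack(
    [np.convolve(anechoic[:, b], kernel, mode="full")[:n_frames]
     for b in range(n_bins)], axis=1)

# Context-windowed inputs: each target frame sees `context` reverberant frames.
X = np.stack([reverb[t - context + 1:t + 1].ravel()
              for t in range(context - 1, n_frames)])
Y = anechoic[context - 1:]

# One sigmoid hidden layer with a linear output (a shallow sketch of the
# spectral-mapping network; the paper uses deeper architectures).
d_in, d_hid, d_out = X.shape[1], 64, n_bins
W1 = rng.standard_normal((d_in, d_hid)) * 0.05
b1 = np.zeros(d_hid)
W2 = rng.standard_normal((d_hid, d_out)) * 0.05
b2 = np.zeros(d_out)

def forward(X):
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))  # sigmoid hidden activations
    return H, H @ W2 + b2                      # linear spectral estimate

_, pred0 = forward(X)
mse_before = np.mean((pred0 - Y) ** 2)

lr = 0.05
for _ in range(300):
    H, pred = forward(X)
    err = (pred - Y) / len(X)                  # gradient of MSE (up to a constant)
    gW2, gb2 = H.T @ err, err.sum(0)
    dH = err @ W2.T * H * (1.0 - H)            # backprop through the sigmoid
    gW1, gb1 = X.T @ dH, dH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
mse_after = np.mean((pred - Y) ** 2)
```

In the paper's pipeline the estimated spectral frames would then be combined with phase information and resynthesized to a time-domain dereverberated signal; that resynthesis step is omitted here.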
Keywords :
neural nets; speech intelligibility; speech recognition; DNN; automatic speech recognition; deep neural networks; hearing-impaired listeners; speaker identification systems; spectral mapping; spectral representation; speech dereverberation; speech intelligibility; Reverberation; Spectrogram; Speech; Speech processing; System-on-chip; Time-frequency analysis; Training; Deep Neural Networks; Spectral Mapping; Speech Dereverberation;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location :
Florence
Type :
conf
DOI :
10.1109/ICASSP.2014.6854479
Filename :
6854479