DocumentCode
730307
Title
Speech dereverberation using a learned speech model
Author
Liang, Dawen; Hoffman, Matthew D.; Mysore, Gautham J.
fYear
2015
fDate
19-24 April 2015
Firstpage
1871
Lastpage
1875
Abstract
We present a general single-channel speech dereverberation method based on an explicit generative model of reverberant and noisy speech. To regularize the model, we use a pre-learned speech model of clean and dry speech as a prior and perform posterior inference over the latent clean speech. The reverberation kernel and additive noise are estimated under the maximum-likelihood framework. Our model assumes no prior knowledge about specific speakers or rooms, and consequently our method can automatically adapt to various reverberant and noisy conditions. We evaluate the proposed model on both simulated data and real recordings from the REVERB Challenge in the task of speech enhancement and obtain results comparable to or better than the state of the art.
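Illustrative sketch (not part of the original record): the abstract describes the method only in prose, so the NumPy sketch below shows the general idea of dereverberation with a fixed, pre-learned clean-speech dictionary. The reverberant magnitude spectrogram is modeled as a per-frequency convolution of a latent clean-speech spectrogram with a reverberation kernel, plus a stationary noise floor. All function and variable names (dereverberate, reverb, W, R) and the heuristic multiplicative updates are assumptions made for this sketch; the paper's actual method is a probabilistic (Bayesian NMF) model fit with variational inference over the latent clean speech, not these updates.

# Hypothetical sketch only: NMF-style dereverberation with a fixed clean-speech
# dictionary. This is a simplified stand-in, not the authors' algorithm.
import numpy as np


def reverb(S, R):
    """Per-frequency convolution of spectrogram S (F, T) with kernel R (F, L)."""
    T = S.shape[1]
    V = np.zeros_like(S)
    for tau in range(R.shape[1]):
        V[:, tau:] += R[:, tau:tau + 1] * S[:, :T - tau]
    return V


def reverb_adjoint(E, R):
    """Adjoint of `reverb` with respect to S (correlation with the kernel)."""
    T = E.shape[1]
    G = np.zeros_like(E)
    for tau in range(R.shape[1]):
        G[:, :T - tau] += R[:, tau:tau + 1] * E[:, tau:]
    return G


def dereverberate(Y, W, n_taps=10, n_iters=200, eps=1e-12):
    """Estimate a clean spectrogram from the reverberant, noisy Y (F, T),
    keeping the pre-learned clean-speech dictionary W (F, K) fixed."""
    F, T = Y.shape
    rng = np.random.default_rng(0)
    H = rng.random((W.shape[1], T)) + eps      # activations of the clean-speech model
    R = np.ones((F, n_taps)) / n_taps          # per-frequency reverberation kernel
    noise = np.full((F, 1), Y.mean() * 1e-2)   # stationary additive noise floor

    for _ in range(n_iters):
        S = W @ H
        V = reverb(S, R) + noise               # model's reconstruction of Y

        # Heuristic multiplicative update for the activations (NMF style).
        H *= (W.T @ reverb_adjoint(Y, R)) / (W.T @ reverb_adjoint(V, R) + eps)

        # Update the reverberation kernel and the noise floor.
        S = W @ H
        V = reverb(S, R) + noise
        for tau in range(n_taps):
            num = (Y[:, tau:] * S[:, :T - tau]).sum(axis=1)
            den = (V[:, tau:] * S[:, :T - tau]).sum(axis=1) + eps
            R[:, tau] *= num / den
        noise *= Y.sum(axis=1, keepdims=True) / (V.sum(axis=1, keepdims=True) + eps)

    return W @ H, R, noise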
Keywords
reverberation; dereverberation; speech enhancement; additive noise; explicit generative model; learned speech model; maximum-likelihood framework; noisy speech; reverberation kernel; single-channel speech dereverberation; Bayesian modeling; non-negative matrix factorization; variational inference
fLanguage
English
Publisher
IEEE
Conference_Title
2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location
South Brisbane, QLD, Australia
Type
conf
DOI
10.1109/ICASSP.2015.7178295
Filename
7178295
Link To Document