Title :
Improving Robustness of Deep Neural Network Acoustic Models via Speech Separation and Joint Adaptive Training
Author :
Arun Narayanan; DeLiang Wang
Author_Institution :
Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, USA
Abstract :
Although deep neural network (DNN) acoustic models are known to be inherently noise-robust, especially with matched training and testing data, the use of speech separation as a frontend and for deriving alternative feature representations has been shown to improve performance in challenging environments. We first present a supervised speech separation system that significantly improves automatic speech recognition (ASR) performance in realistic noise conditions. The system performs separation via ratio time-frequency masking; the ideal ratio mask (IRM) is estimated using DNNs. We then propose a framework that unifies separation and acoustic modeling via joint adaptive training. Since the modules for acoustic modeling and speech separation are implemented using DNNs, unification is accomplished by introducing additional hidden layers with fixed weights and an appropriate network architecture. On the CHiME-2 medium-large vocabulary ASR task, with log mel spectral features as input to the acoustic model, an independently trained ratio masking frontend improves word error rates by 10.9% (relative) compared to the noisy baseline. In comparison, the jointly trained system improves performance by 14.4%. We also experiment with alternative feature representations to augment the standard log mel features, such as the noise and speech estimates obtained from the separation module, and the standard feature set used for IRM estimation. Our best system obtains a word error rate of 15.4% (absolute), an improvement of 4.6 percentage points over the next best result on this corpus.
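To make the masking operation concrete: at training time, the IRM target is computed from the premixed speech and noise signals, and at test time the estimated mask is applied pointwise to the noisy spectrogram. Below is a minimal NumPy sketch of the commonly used IRM definition; the exponent `beta = 0.5` is a frequent choice in the literature and is an assumption here, not necessarily the exact setting in this paper.

```python
import numpy as np

def ideal_ratio_mask(speech_power, noise_power, beta=0.5):
    """Ideal ratio mask from premixed speech and noise power spectrograms.

    Standard form: IRM(t, f) = (S(t, f) / (S(t, f) + N(t, f))) ** beta,
    where S and N are the time-frequency power of clean speech and noise.
    beta=0.5 is an assumed (common) exponent, not the paper's verbatim setting.
    """
    eps = 1e-10  # guard against division by zero in silent T-F units
    return (speech_power / (speech_power + noise_power + eps)) ** beta

def apply_mask(mask, noisy_spectrogram):
    """Separate speech by pointwise (Hadamard) masking of the noisy spectrogram."""
    return mask * noisy_spectrogram

# Toy example: 2 frames x 3 frequency bins of speech/noise power
speech = np.array([[4.0, 1.0, 0.0],
                   [9.0, 0.0, 1.0]])
noise = np.array([[0.0, 1.0, 4.0],
                  [0.0, 1.0, 3.0]])
irm = ideal_ratio_mask(speech, noise)  # values lie in [0, 1]
```

In a supervised setup like the one described, a DNN is trained to predict `irm` from noisy features, and the predicted mask is then used either for separation or to derive features for the acoustic model.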
Keywords :
acoustic signal processing; adaptive signal processing; error statistics; feature extraction; neural nets; signal representation; source separation; spectral analysis; speech synthesis; time-frequency analysis; CHiME-2 medium-large vocabulary ASR; DNN; IRM estimation; automatic speech recognition; deep neural network acoustic model; ideal ratio mask; joint adaptive training; network architecture; realistic noise conditions; spectral feature representation; standard feature set; standard log mel features; supervised speech separation system; time-frequency masking; word error rate; Acoustics; Adaptation models; Joints; Noise; Speech; Speech processing; Training; CHiME-2; joint training; ratio masking; robust ASR; time-frequency masking;
Journal_Title :
IEEE/ACM Transactions on Audio, Speech, and Language Processing
DOI :
10.1109/TASLP.2014.2372314