Title :
AR-GARCH in Presence of Noise: Parameter Estimation and Its Application to Voice Activity Detection
Author :
Mousazadeh, Saman ; Cohen, Israel
Author_Institution :
Dept. of Electr. Eng., Technion - Israel Inst. of Technol., Haifa, Israel
Date :
1 May 2011
Abstract :
This paper presents a new method for voice activity detection (VAD) based on the autoregressive-generalized autoregressive conditional heteroscedasticity (AR-GARCH) model. The speech signal is modeled as an AR-GARCH process in the time domain, and the likelihood ratio is computed and compared to a threshold. The time-varying variance of the speech signal, needed for computing the likelihood function under the speech presence hypothesis, is estimated using the AR-GARCH model. The model parameters are estimated using a novel technique based on recursive maximum likelihood (RML) estimation. The variance of the additive noise, a critical issue in designing a VAD, is estimated using the improved minima controlled recursive averaging (IMCRA) method, suitably modified for noise variance estimation in the time domain. The performance of the VAD and of the parameter estimation method is examined under several conditions. Experimental results demonstrate the robustness of the AR-GARCH-based VAD to both noise variations and low signal-to-noise ratio (SNR) conditions.
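Code Sketch :
As a rough illustration of the detection rule outlined in the abstract, the following Python sketch computes a per-sample log-likelihood ratio under an AR-GARCH speech model. This is a minimal sketch, not the authors' implementation: the function name ar_garch_vad is hypothetical, the AR and GARCH(1,1) parameters and the noise variance are assumed known (the paper estimates them with RML and a modified IMCRA, respectively), and the noisy samples are used directly as AR regressors, a simplification the paper's formulation avoids.
```python
import numpy as np

def ar_garch_vad(y, ar_coefs, omega, alpha, beta, sigma_v2, threshold=1.0):
    """Per-sample likelihood-ratio VAD sketch under an AR-GARCH speech model.

    y        : 1-D array of noisy time-domain samples
    ar_coefs : AR coefficients a_1..a_p of the clean speech (assumed known)
    omega, alpha, beta : GARCH(1,1) parameters of the AR innovation variance
    sigma_v2 : additive-noise variance (the paper estimates this via a
               modified IMCRA; here it is simply given)
    """
    p = len(ar_coefs)
    decisions = np.zeros(len(y), dtype=bool)
    # Initialize the conditional variance at its unconditional GARCH value.
    sigma_s2 = omega / max(1.0 - alpha - beta, 1e-6)
    e_prev = 0.0
    for t in range(p, len(y)):
        # GARCH(1,1) recursion: time-varying variance of the AR innovation.
        sigma_s2 = omega + alpha * e_prev**2 + beta * sigma_s2
        # One-step AR prediction (noisy samples stand in for clean ones here).
        pred = np.dot(ar_coefs, y[t - p:t][::-1])
        e = y[t] - pred
        # Gaussian log-likelihoods: H1 (speech + noise) vs. H0 (noise only).
        var1 = sigma_s2 + sigma_v2
        ll1 = -0.5 * (np.log(2.0 * np.pi * var1) + e**2 / var1)
        ll0 = -0.5 * (np.log(2.0 * np.pi * sigma_v2) + y[t]**2 / sigma_v2)
        decisions[t] = (ll1 - ll0) > np.log(threshold)
        e_prev = e
    return decisions

# Toy usage on white noise; prints the fraction of samples flagged as speech.
rng = np.random.default_rng(0)
d = ar_garch_vad(rng.normal(0.0, 1.0, 4000),
                 ar_coefs=np.array([0.7, -0.2]),
                 omega=0.01, alpha=0.1, beta=0.85, sigma_v2=1.0)
print(d.mean())
```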
Keywords :
maximum likelihood estimation; parameter estimation; speech recognition; time-domain analysis; AR-GARCH model; IMCRA method; RML estimation; SNR condition; VAD; autoregressive-generalized autoregressive conditional heteroscedasticity model; improved minima controlled recursive averaging method; recursive maximum likelihood estimation; signal-to-noise ratio condition; speech signal; time domain; time-varying variance; voice activity detection; Autoregressive-generalized autoregressive conditional heteroscedasticity (AR-GARCH); noisy data; nonstationary noise; voice activity detector (VAD)
Journal_Title :
IEEE Transactions on Audio, Speech, and Language Processing
DOI :
10.1109/TASL.2010.2070494