DocumentCode :
2705224
Title :
Robust speaker verification via fusion of speech and lip modalities
Author :
Wark, T. ; Sridharan, S. ; Chandran, V.
Author_Institution :
Sch. of Electr. & Electron. Syst. Eng., Queensland Univ. of Technol., Brisbane, Qld., Australia
Volume :
6
fYear :
1999
fDate :
15-19 Mar 1999
Firstpage :
3061
Abstract :
This paper investigates the use of lip information, in conjunction with speech information, for robust speaker verification in the presence of background noise. It has been previously shown in our own work, and in the work of others, that features extracted from a speaker's moving lips hold speaker dependencies which are complementary to speech features. We demonstrate that the fusion of lip and speech information allows for a highly robust speaker verification system which outperforms either sub-system alone. We present a new technique for determining the weighting to be applied to each modality so as to optimize the performance of the fused system. Given a correct weighting, lip information is shown to be highly effective for reducing the false acceptance and false rejection error rates in the presence of background noise.
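The abstract describes weighting two modality scores before a verification decision. Below is a minimal sketch of generic late (score-level) fusion, assuming each modality produces a single verification score; the weight `alpha`, the function names, and the zero decision threshold are illustrative assumptions, not the paper's own weighting technique.

```python
# Hypothetical sketch of score-level fusion for audio-visual speaker
# verification. Assumes each sub-system emits a scalar score (e.g. a
# log-likelihood ratio); higher means more likely the claimed speaker.

def fuse_scores(speech_score: float, lip_score: float, alpha: float = 0.7) -> float:
    """Linearly combine the two modality scores.

    alpha weights the speech modality, (1 - alpha) the lip modality.
    Under heavy acoustic noise, a smaller alpha shifts reliance
    toward the (noise-immune) lip stream.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * speech_score + (1.0 - alpha) * lip_score


def verify(speech_score: float, lip_score: float,
           alpha: float = 0.7, threshold: float = 0.0) -> bool:
    # Accept the identity claim when the fused score clears the threshold.
    # The threshold trades off false acceptances against false rejections.
    return fuse_scores(speech_score, lip_score, alpha) >= threshold
```

The paper's contribution is choosing `alpha` adaptively to optimize fused performance; a fixed default as above is only a placeholder.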
Keywords :
acoustic noise; audio-visual systems; feature extraction; gesture recognition; sensor fusion; speaker recognition; background noise; error rates; false acceptance; false rejection; fusion; moving lips; performance; robust speaker verification; speech features; speech information; weighting; Acoustic noise; Authentication; Background noise; Feature extraction; Hidden Markov models; Laboratories; Lips; Noise robustness; Speaker recognition; Speech;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
1999 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Proceedings
Conference_Location :
Phoenix, AZ
ISSN :
1520-6149
Print_ISBN :
0-7803-5041-3
Type :
conf
DOI :
10.1109/ICASSP.1999.757487
Filename :
757487