DocumentCode
417241
Title
Automated lip-reading for improved speech intelligibility
Author
McClain, Matthew ; Brady, Kevin ; Brandstein, Michael ; Quatieri, Thomas
Author_Institution
Lincoln Lab., MIT, Lexington, MA, USA
Volume
1
fYear
2004
fDate
17-21 May 2004
Abstract
Various psychoacoustic experiments have shown that visual features strongly affect the perception of speech. This contribution is most pronounced in noisy environments, where the intelligibility of audio-only speech degrades quickly. This paper explores the effectiveness of using extracted visual features, such as lip height and width, to improve speech intelligibility in noisy environments. The intelligibility content of these features is investigated through intelligibility tests on both the original video and an animated rendition generated from the extracted visual features. These experiments demonstrate that the extracted video features capture important aspects of intelligibility that may be exploited in speech enhancement and coding applications. Alternatively, the extracted visual features can be transmitted in a bandwidth-efficient way to augment speech coders.
Keywords
computer animation; feature extraction; hearing; speech coding; speech enhancement; speech intelligibility; video signal processing; automated lip-reading; speech coders; speech perception; visual feature extraction; acoustic noise; animation; bandwidth; degradation; psychology; speech processing; testing; working environment noise
fLanguage
English
Publisher
ieee
Conference_Titel
IEEE International Conference on Acoustics, Speech, and Signal Processing, 2004. Proceedings (ICASSP '04)
ISSN
1520-6149
Print_ISBN
0-7803-8484-9
Type
conf
DOI
10.1109/ICASSP.2004.1326082
Filename
1326082
Link To Document