DocumentCode
2316656
Title
Locating and tracking facial speech features
Author
Luettin, Juergen ; Thacker, Neil A. ; Beet, Steve W.
Author_Institution
Dept. of Electron. & Electr. Eng., Sheffield Univ., UK
Volume
1
fYear
1996
fDate
25-29 Aug 1996
Firstpage
652
Abstract
This paper describes a robust method for extracting visual speech information from the shape of the lips, for use in an automatic speechreading (lipreading) system. Lip deformation is modelled by a statistically based deformable contour model which learns typical lip deformation from a training set. The main difficulty in locating and tracking lips lies in finding dominant image features to represent the lip contours. We describe the use of a statistical profile model which learns dominant image features from a training set. The model captures global intensity variation due to different illumination and different skin reflectance, as well as intensity changes at the inner lip contour due to mouth opening and the visibility of teeth and tongue. The method is validated for locating and tracking lip movements on a database covering a broad variety of speakers.
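The statistical profile model described in the abstract can be sketched as follows: for each contour landmark, grey-level profiles sampled perpendicular to the lip contour are collected over a training set, and their mean and covariance are learned; during tracking, candidate landmark positions are scored by Mahalanobis distance to the learned model. This is a minimal illustrative sketch with synthetic data, not the authors' implementation; all names, sizes, and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 50 grey-level profiles of length 9,
# each sampled perpendicular to a lip-contour landmark in one training image.
profiles = rng.normal(loc=np.linspace(0.2, 0.8, 9), scale=0.05, size=(50, 9))

# Learn the statistical profile model: mean profile and covariance
# (a small regularizer keeps the covariance invertible).
mean = profiles.mean(axis=0)
cov = np.cov(profiles, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))

def profile_cost(candidate):
    """Mahalanobis distance of a candidate profile from the learned model."""
    d = candidate - mean
    return float(d @ cov_inv @ d)

# During search, the landmark moves to the candidate position whose
# sampled profile best matches the model (lowest cost). Here the
# candidates are the mean profile perturbed by increasing noise.
candidates = [mean + rng.normal(0.0, s, size=9) for s in (0.01, 0.2, 0.5)]
best = min(range(len(candidates)), key=lambda i: profile_cost(candidates[i]))
```

Because the profile statistics are learned from data rather than fixed edge templates, the same cost function accommodates global illumination and skin-reflectance variation as well as appearance changes at the inner lip contour.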
Keywords
face recognition; object recognition; statistical analysis; automatic speechreading system; dominant image features; facial speech feature location; facial speech feature tracking; global intensity variation; illumination; inner lip contour; intensity changes; lip deformation; lipreading system; skin reflectance; statistical profile model; statistically based deformable contour model; visual speech information extraction; Data mining; Deformable models; Lighting; Lips; Mouth; Reflectivity; Robustness; Shape; Skin; Speech
fLanguage
English
Publisher
ieee
Conference_Titel
Proceedings of the 13th International Conference on Pattern Recognition, 1996
Conference_Location
Vienna
ISSN
1051-4651
Print_ISBN
0-8186-7282-X
Type
conf
DOI
10.1109/ICPR.1996.546105
Filename
546105
Link To Document