DocumentCode :
3232976
Title :
A survey on mouth modeling and analysis for Sign Language recognition
Author :
Antonakos, Epameinondas ; Roussos, Anastasios ; Zafeiriou, Stefanos
Author_Institution :
Dept. of Comput., Imperial Coll. London, London, UK
fYear :
2015
fDate :
4-8 May 2015
Firstpage :
1
Lastpage :
7
Abstract :
Around 70 million Deaf people worldwide use Sign Languages (SLs) as their native languages. At the same time, they often have limited reading/writing skills in the spoken language. This puts them at a severe disadvantage in many contexts, including education, work, and the use of computers and the Internet. Automatic Sign Language Recognition (ASLR) can support the Deaf in many ways, e.g. by enabling the development of systems for Human-Computer Interaction in SL and for translation between sign and spoken language. Research in ASLR usually revolves around the automatic understanding of manual signs. Recently, the ASLR research community has started to appreciate the importance of non-manuals, since they are related to the lexical meaning of a sign, the syntax and the prosody. Non-manuals include body and head pose, movement of the eyebrows and the eyes, as well as blinks and squints. Arguably, the mouth is one of the most involved parts of the face in non-manuals. Mouth actions relevant to ASLR are either mouthings, i.e. visual syllables articulated with the mouth while signing, or non-verbal mouth gestures. Both are very important in ASLR. In this paper, we present the first survey on mouth non-manuals in ASLR. We start by showing why mouth motion is important in SL and reviewing the relevant techniques that exist within ASLR. Since limited research has been conducted on the automatic analysis of mouth motion in the context of ASLR, we proceed by surveying relevant techniques from the areas of automatic mouth expression recognition and visual speech recognition which can be applied to the task. Finally, we conclude by presenting the challenges and potential of automatic analysis of mouth motion in the context of ASLR.
Keywords :
Internet; handicapped aids; human computer interaction; pose estimation; sign language recognition; speech recognition; ASLR research community; automatic mouth expression; automatic sign language recognition; body pose; deaf; head pose; human-computer interaction; lexical sign meaning; mouth actions; mouth analysis; mouth modeling; native languages; nonverbal mouth gestures; reading-writing skills; visual speech recognition; visual syllables; Context; Face recognition; Facial features; Hidden Markov models; Manuals; Mouth; Three-dimensional displays;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)
Conference_Location :
Ljubljana, Slovenia
Type :
conf
DOI :
10.1109/FG.2015.7163162
Filename :
7163162