DocumentCode :
2701571
Title :
Recovering the linguistic components of the manual signs in American Sign Language
Author :
Ding, Liya ; Martinez, Aleix M.
Author_Institution :
Ohio State Univ., Columbus
fYear :
2007
fDate :
5-7 Sept. 2007
Firstpage :
447
Lastpage :
452
Abstract :
Manual signs in American Sign Language (ASL) are constructed using three building blocks: handshape, motion, and place of articulation. Only when all three are successfully estimated can a sign be uniquely identified. Hence, pattern recognition techniques that use only a subset of these components are inappropriate. To achieve accurate classification, the motion, the handshape, and their three-dimensional position must all be recovered. In this paper, we define an algorithm to determine these three components from a single video sequence of two-dimensional images of a sign. We demonstrate the use of our algorithm in describing and recognizing a set of manual signs in ASL.
Keywords :
computational linguistics; handicapped aids; image classification; image motion analysis; image sequences; American sign language; deaf people; linguistic component; pattern recognition technique; two-dimensional picture; video sequence; Computer interfaces; Data mining; Deafness; Fingers; Handicapped aids; Mouth; Pattern recognition; Robustness; Torso; Video sequences;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS 2007)
Conference_Location :
London
Print_ISBN :
978-1-4244-1696-7
Electronic_ISBN :
978-1-4244-1696-7
Type :
conf
DOI :
10.1109/AVSS.2007.4425352
Filename :
4425352