DocumentCode
357019
Title
Automated lip synchronized speech driven facial animation
Author
Melek, Zeki; Akarun, Lale
Author_Institution
Dept. of Comput. Eng., Bogazici Univ., Istanbul, Turkey
Volume
2
fYear
2000
fDate
2000
Firstpage
623
Abstract
Talking 3D synthetic faces are now used in many applications involving human-computer interaction. The lip synchronization of these faces is mostly done mechanically by computer animators. Although some work has been done on automated lip-synchronized facial animation, these studies are mostly based on text input. We use speech in Turkish as the input to generate lip-synchronized facial animation. The speaker's recorded voice is converted into lip shape classes on the 3D model. The voice is analyzed and classified using a training set. Lip animation is driven by the facial muscles and the jaw. Facial muscles are modelled onto our facial model. For more realistic facial animation, facial tissue is modelled as well, and the interactions between the epidermis, the subcutaneous layer and the bone are taken into account. Natural-looking facial animation is achieved in real time on a personal computer. We also show that our system is compatible with MPEG-4.
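The abstract gives no implementation details, but as a rough illustration of the pipeline it describes (per-frame classification of speech into lip shape classes, then mapping each class to jaw and muscle activations), the following minimal Python sketch may help. All names, classes, and values (LipShapeTarget, LIP_SHAPE_TARGETS, classify_frame, the smoothing scheme) are hypothetical assumptions, not taken from the paper.

```python
# Hypothetical sketch of a speech-to-lip-shape pipeline: audio frames are
# classified into lip shape classes, and each class drives jaw rotation and
# facial-muscle contraction parameters. Not the paper's actual method.
from dataclasses import dataclass
from typing import List

@dataclass
class LipShapeTarget:
    jaw_open: float        # jaw rotation, 0 = closed, 1 = fully open
    lip_rounding: float    # mouth-rounding muscle contraction
    lip_stretch: float     # mouth-widening muscle contraction

# Illustrative lip shape classes (visemes); the actual classes used for
# Turkish speech are not specified in the abstract.
LIP_SHAPE_TARGETS = {
    "silence": LipShapeTarget(0.0, 0.0, 0.0),
    "a":       LipShapeTarget(0.8, 0.1, 0.3),
    "o_u":     LipShapeTarget(0.4, 0.9, 0.0),
    "e_i":     LipShapeTarget(0.3, 0.0, 0.7),
    "m_b_p":   LipShapeTarget(0.0, 0.4, 0.0),
}

def classify_frame(audio_frame: List[float]) -> str:
    """Stand-in for the trained classifier (features + class lookup)."""
    energy = sum(x * x for x in audio_frame) / max(len(audio_frame), 1)
    return "silence" if energy < 1e-4 else "a"

def animate(frames: List[List[float]], smoothing: float = 0.5) -> List[LipShapeTarget]:
    """Map each audio frame to smoothly interpolated jaw/muscle parameters."""
    state = LipShapeTarget(0.0, 0.0, 0.0)
    poses = []
    for frame in frames:
        target = LIP_SHAPE_TARGETS[classify_frame(frame)]
        # Exponential smoothing toward the target pose so the animation
        # does not jump abruptly between classes from frame to frame.
        state = LipShapeTarget(
            state.jaw_open + smoothing * (target.jaw_open - state.jaw_open),
            state.lip_rounding + smoothing * (target.lip_rounding - state.lip_rounding),
            state.lip_stretch + smoothing * (target.lip_stretch - state.lip_stretch),
        )
        poses.append(state)
    return poses
```

In a muscle-based face model of the kind the abstract mentions, parameters like these would in turn deform the skin mesh through the modelled muscle and tissue layers; that stage is omitted here.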
Keywords
computer animation; multimedia computing; realistic images; speech processing; synchronisation; user interfaces; 3D model; 3D synthetic faces; MPEG4; Turkish; bone; computer animators; epidermis; facial model; facial muscles; facial tissue; human-computer interaction; lip shape classes; lip synchronized speech driven facial animation; natural-looking facial animation; personal computer; real-time; subcutaneous layer; training set; Application software; Bones; Epidermis; Face; Facial animation; Facial muscles; MPEG 4 Standard; Microcomputers; Shape; Speech analysis;
fLanguage
English
Publisher
ieee
Conference_Titel
2000 IEEE International Conference on Multimedia and Expo (ICME 2000)
Conference_Location
New York, NY
Print_ISBN
0-7803-6536-4
Type
conf
DOI
10.1109/ICME.2000.871440
Filename
871440