DocumentCode :
3256920
Title :
Principal components of expressive speech animation
Author :
Kshirsagar, Sumedha ; Molet, Tom ; Magnenat-Thalmann, Nadia
Author_Institution :
MIRALab CUI, Geneva Univ., Switzerland
fYear :
2001
fDate :
2001
Firstpage :
38
Lastpage :
44
Abstract :
We describe a new technique for expressive and realistic speech animation. An optical tracking system extracts the 3D positions of markers attached at feature point locations, as defined by the MPEG-4 standard, to capture the facial movements of a talking person. We then apply principal component analysis to this data to form a vector space representation, which we call the “expression and viseme space”. Such a representation not only offers insight into improving the realism of animated faces, but also provides a new way of generating convincing speech animation and of blending between several expressions. Because rigid body movements and deformation constraints on the facial movements are accounted for in this analysis, the resulting facial animation is very realistic.
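A minimal sketch of the general idea described in the abstract: performing PCA on captured feature point trajectories to obtain a low-dimensional space in which expressions and visemes can be blended. This is not the authors' implementation; the array layout, component count, and helper names are illustrative assumptions (input `frames` is assumed to be an (n_frames, n_points*3) array of MPEG-4 feature point positions with rigid head motion already removed).

```python
import numpy as np

def build_expression_space(frames: np.ndarray, n_components: int = 10):
    """Return the mean face and the leading principal components of the data."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD of the centered data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(frame: np.ndarray, mean: np.ndarray, components: np.ndarray):
    """Map one facial configuration into the reduced expression/viseme space."""
    return components @ (frame - mean)

def reconstruct(coeffs: np.ndarray, mean: np.ndarray, components: np.ndarray):
    """Map expression-space coefficients back to feature point positions."""
    return mean + components.T @ coeffs

def blend(coeffs_a: np.ndarray, coeffs_b: np.ndarray, t: float):
    """Linearly blend two configurations (e.g. an expression and a viseme)
    in the reduced space; t = 0 gives the first, t = 1 the second."""
    return (1.0 - t) * coeffs_a + t * coeffs_b
```

In such a sketch, blending is done on the low-dimensional coefficients rather than on raw vertex positions, which is one reason a PCA-based space can yield smoother, more plausible transitions between expressions.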
Keywords :
computer animation; optical tracking; principal component analysis; realistic images; user interfaces; MPEG-4 standard; expressive speech animation; face animation; facial movements; feature point locations; optical tracking system; principal component analysis; realistic speech animation; rigid body movements; vector space representation; Data mining; Deformable models; Facial animation; Interpolation; MPEG 4 Standard; Mesh generation; Muscles; Principal component analysis; Speech; Tracking;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Computer Graphics International 2001. Proceedings
Conference_Location :
Hong Kong
ISSN :
1530-1052
Print_ISBN :
0-7695-1007-8
Type :
conf
DOI :
10.1109/CGI.2001.934656
Filename :
934656