DocumentCode :
2038947
Title :
A Non-Invasive Approach for Driving Virtual Talking Heads from Real Facial Movements
Author :
Fanelli, Gabriele ; Fratarcangeli, Marco
Author_Institution :
Univ. of Rome "La Sapienza", Rome
fYear :
2007
fDate :
7-9 May 2007
Firstpage :
1
Lastpage :
4
Abstract :
In this paper, we present a system to accurately control the facial animation of synthetic virtual heads from the movements of a real person. These movements are tracked using active appearance models from videos acquired with an inexpensive webcam. The tracked motion is then encoded using the widely adopted MPEG-4 Facial and Body Animation standard, so that each animation frame is expressed by a compact subset of the facial animation parameters (FAPs) defined by the standard. For each FAP, we precompute the corresponding facial configuration of the virtual head through an accurate anatomical simulation. By linearly interpolating, frame by frame, the facial configurations corresponding to the tracked FAPs, we obtain the animation of the virtual head in a simple and straightforward way.
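The last step described in the abstract, linear blending of precomputed per-FAP facial configurations, can be illustrated with a minimal sketch. The Python code below is not the authors' implementation; it assumes the neutral mesh and one precomputed target mesh per FAP (obtained offline from the anatomical simulation) are available as NumPy arrays, and all names (animate_frame, fap_targets, reference_amplitudes) are illustrative.

    # Minimal sketch, assuming per-FAP target meshes were precomputed offline.
    # Per frame, the final vertex positions are the neutral pose plus the
    # displacements of each FAP target, weighted by the tracked FAP amplitude.
    import numpy as np

    def animate_frame(neutral_mesh, fap_targets, fap_values, reference_amplitudes):
        """Return interpolated vertex positions for one animation frame.

        neutral_mesh         -- (V, 3) array of neutral-pose vertex positions
        fap_targets          -- dict: FAP id -> (V, 3) precomputed pose for that FAP
        fap_values           -- dict: FAP id -> tracked FAP amplitude for this frame
        reference_amplitudes -- dict: FAP id -> amplitude at which the target was simulated
        """
        frame = neutral_mesh.copy()
        for fap_id, value in fap_values.items():
            weight = value / reference_amplitudes[fap_id]
            # add this FAP's weighted displacement from the neutral pose
            frame += weight * (fap_targets[fap_id] - neutral_mesh)
        return frame

For example, a FAP tracked at half its reference amplitude contributes half of its precomputed displacement to the final mesh, so the per-frame cost is one weighted sum over the active FAPs.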
Keywords :
computer animation; face recognition; image motion analysis; optical tracking; video signal processing; virtual reality; MPEG-4 body animation standard; MPEG-4 facial animation standard; active appearance models; anatomical simulation; facial animation; facial animation parameters; facial movements; motion tracking; virtual heads; virtual talking heads; webcam; Active appearance model; Face detection; Facial animation; Facial animation parameters (FAP); Head; MPEG-4 Standard; Shape; Target tracking; Vectors; Videos; 3D Motion Animation; Active Appearance Models; Face Tracking; Facial Animation; Inverse Compositional Algorithm;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
3DTV Conference, 2007
Conference_Location :
Kos Island
Print_ISBN :
978-1-4244-0722-4
Electronic_ISBN :
978-1-4244-0722-4
Type :
conf
DOI :
10.1109/3DTV.2007.4379425
Filename :
4379425