DocumentCode
3348741
Title
Appearance model based face-to-face transform
Author
Nagai, Takayuki ; Nguyen, Truong
Author_Institution
Dept. of Electron. Eng., Univ. of Electro-Commun., Tokyo, Japan
Volume
5
fYear
2004
fDate
17-21 May 2004
Abstract
In this paper, a novel approach to the face-to-face transform is presented. The face-to-face transform is a technique that transfers one person's facial actions onto another person's face. In general, 3D face models are used for such a transformation, so the facial action parameters must be estimated from the 2D input images, which is not an easy task. In contrast, the proposed approach is based on a 2D appearance model rather than a 3D model, so the model can be acquired by learning directly from training images. To achieve this, we make use of the hidden Markov model (HMM) framework, which models the correspondence between the input face and the target face as well as the appearances of both faces. Experimental results show the effectiveness of the proposed method.
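The sketch below is not the authors' code; it is a minimal, hypothetical illustration of the idea the abstract describes, assuming the 2D appearance of each face is summarized by low-dimensional (e.g. PCA) feature vectors, that each HMM state carries a paired source/target appearance template, and that Gaussian emissions and toy parameters stand in for a trained model. It decodes the state sequence from source-face features with the Viterbi algorithm and emits the corresponding target-face appearances.

```python
# Hypothetical sketch (not from the paper): an HMM whose hidden states hold paired
# appearance templates for a source face and a target face. Decoding the states from
# source-face features and emitting the target templates gives a simple
# appearance-based face-to-face transform.
import numpy as np

rng = np.random.default_rng(0)

n_states, dim = 4, 8          # number of appearance states; PCA feature dimension (assumed)
T = 20                        # length of the input image sequence

# Paired appearance models per state: mean source-face features and the
# corresponding mean target-face appearance (toy values).
src_means = rng.normal(size=(n_states, dim))
tgt_means = rng.normal(size=(n_states, dim))

# HMM parameters (toy values): uniform prior, sticky transitions,
# isotropic Gaussian emissions around each state's source-face mean.
log_pi = np.full(n_states, -np.log(n_states))
A = np.full((n_states, n_states), 0.1 / (n_states - 1))
np.fill_diagonal(A, 0.9)
log_A = np.log(A)

def log_emission(x):
    """Log N(x | src_means[s], I), up to a constant, for every state s."""
    return -0.5 * np.sum((x - src_means) ** 2, axis=1)

def viterbi(obs):
    """Most likely state sequence for a sequence of source-face feature vectors."""
    n = len(obs)
    delta = np.zeros((n, n_states))
    back = np.zeros((n, n_states), dtype=int)
    delta[0] = log_pi + log_emission(obs[0])
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_A          # (from state, to state)
        back[t] = np.argmax(scores, axis=0)
        delta[t] = scores[back[t], np.arange(n_states)] + log_emission(obs[t])
    states = np.zeros(n, dtype=int)
    states[-1] = np.argmax(delta[-1])
    for t in range(n - 2, -1, -1):
        states[t] = back[t + 1, states[t + 1]]
    return states

# Simulated source-face feature sequence; in practice these would come from
# projecting input frames onto a learned 2D appearance basis.
obs = src_means[rng.integers(n_states, size=T)] + 0.1 * rng.normal(size=(T, dim))

states = viterbi(obs)
transformed = tgt_means[states]   # target-face appearance for each input frame
print(states)
print(transformed.shape)          # (T, dim)
```

In an actual system the templates, transition probabilities, and emission densities would be learned from paired training sequences of the two faces, and the emitted target-face features would be mapped back to pixels through the appearance model; the toy values here only show the data flow.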
Keywords
computer animation; face recognition; gesture recognition; hidden Markov models; image representation; 2D appearance model; HMM; animated characters; appearance model based face-to-face transform; face correspondence measurement; facial actions; hidden Markov model; image representation; talking heads; training image direct learning; Computer interfaces; Face recognition; Facial animation; Head; Hidden Markov models; Image representation; Image sequences; Parameter estimation; Principal component analysis; Robustness;
fLanguage
English
Publisher
ieee
Conference_Titel
2004 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04). Proceedings.
ISSN
1520-6149
Print_ISBN
0-7803-8484-9
Type
conf
DOI
10.1109/ICASSP.2004.1327219
Filename
1327219