DocumentCode :
3179716
Title :
A dynamic viseme model for personalizing a talking head
Author :
Wang, Zhiming ; Cai, Lianhong ; Ai, Haizhou
Author_Institution :
Dept. of Comput. Sci. & Technol., Tsinghua Univ., Beijing, China
Volume :
2
fYear :
2002
fDate :
26-30 Aug. 2002
Firstpage :
1015
Abstract :
Personalizing a talking head means personalizing not only the head model but also its talking manner. In this paper, we propose a dynamic viseme model for visual speech synthesis that handles the co-articulation problem and the various pauses that occur in continuous speech. Facial animation parameters (FAPs), as defined in MPEG-4, are estimated from feature points tracked in two orthogonal views captured via a mirror setup. The individual talking manner, described by the model parameters, is learned from the FAP data to implement a personalized talking head.
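The co-articulation handling described in the abstract can be illustrated with a dominance-function blend in the style of classic visual-speech models (e.g. Cohen-Massaro). This is a minimal hypothetical sketch, not the authors' actual formulation: the function names, the exponential dominance shape, and the single-FAP simplification are all assumptions.

```python
import math

def dominance(t, center, strength, rate):
    """Exponentially decaying influence of a viseme centered at `center`.

    This dominance shape is an illustrative assumption, not the paper's model.
    """
    return strength * math.exp(-rate * abs(t - center))

def blend_fap(t, visemes):
    """Dominance-weighted average of viseme FAP targets at time t.

    `visemes` is a list of (center_time, fap_target, strength, rate) tuples;
    neighboring visemes pull the trajectory toward their targets, which is
    the essence of co-articulation.
    """
    num = 0.0
    den = 0.0
    for center, target, strength, rate in visemes:
        d = dominance(t, center, strength, rate)
        num += d * target
        den += d
    return num / den if den > 0 else 0.0

# Two visemes: an open-mouth FAP target at t=0.0 s and a near-closed one
# at t=0.4 s. Midway, the blended value lies between the two targets.
seq = [(0.0, 1.0, 1.0, 8.0), (0.4, 0.1, 1.0, 8.0)]
mid = blend_fap(0.2, seq)  # influenced by both visemes (co-articulation)
```

A pause in continuous speech would correspond to a neutral-target viseme inserted into `seq`, so the mouth relaxes between phrases rather than interpolating directly between speech poses.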
Keywords :
computer animation; speech synthesis; speech-based user interfaces; MPEG-4; co-articulation problem; continuous speech; dynamic viseme model; facial animation parameters; head model; orthogonal views; pauses; talking head; talking manner; tracked feature points; visual speech synthesis; Artificial intelligence; Computer science; Facial animation; Head; Laboratories; MPEG 4 Standard; Mirrors; Mouth; Speech synthesis;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
6th International Conference on Signal Processing, 2002
Print_ISBN :
0-7803-7488-6
Type :
conf
DOI :
10.1109/ICOSP.2002.1179960
Filename :
1179960