DocumentCode
2374087
Title
Creating Emotional Speech for Conversational Agents
Author
Do, Anh Tuan ; King, Scott A.
Author_Institution
Dept. of Comput. Sci., Texas A&M Univ. - Corpus Christi, TX, USA
fYear
2011
fDate
15-16 May 2011
Firstpage
107
Lastpage
110
Abstract
This paper presents an automatic, real-time approach to creating expressive speech using a set of mathematical models. The approach conveys emotion in synthetic animated speech in both the audio and video channels. We collect facial muscle movement data with a tracking system and a high-speed camera, and use those data to build mathematical models of the visual signal. Our emotional model drives muscle parameters that control the shape of the face and prosodic parameters that control the generation of synthetic audio. By applying these models, expressions can be generated automatically, with some optional parameters. We demonstrate the utility of our emotional model by developing a chat system that uses the six universal emotions to create synthetic emotional speech.
Keywords
computer animation; human computer interaction; software agents; chat system; conversational agent; expressive speech; facial animation; facial expression; mathematical model; synthetic animated speech; synthetic emotional speech; Animation; Cameras; Face; Mathematical model; Muscles; Speech; Tracking; animated speech; emotional speech
fLanguage
English
Publisher
ieee
Conference_Titel
2011 Workshop on Digital Media and Digital Content Management (DMDCM)
Conference_Location
Hangzhou
Print_ISBN
978-1-4577-0271-6
Electronic_ISBN
978-0-7695-4413-7
Type
conf
DOI
10.1109/DMDCM.2011.56
Filename
5959703
Link To Document