DocumentCode :
3703396
Title :
Exploring sources of variation in human behavioral data: Towards automatic audio-visual emotion recognition
Author :
Yelin Kim
Author_Institution :
Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, USA
fYear :
2015
Firstpage :
748
Lastpage :
753
Abstract :
My PhD work aims at developing computational methodologies for automatic emotion recognition from audio-visual behavioral data. A main challenge in automatic emotion recognition is that human behavioral data are highly complex, owing to the multiple sources that vary and modulate behavior. My goal is to provide computational frameworks for understanding and controlling for multiple sources of variation in human behavioral data that co-occur with the production of emotion, with the aim of improving automatic emotion recognition systems [1]-[6]. In particular, my research aims at providing representation, modeling, and analysis methods for complex and time-varying behaviors in human audio-visual data by introducing temporal segmentation and time-series analysis techniques. This research contributes to the affective computing community by improving the performance of automatic emotion recognition systems and by increasing the understanding of affective cues embedded within complex audio-visual data.
Keywords :
"Emotion recognition","Speech","Speech recognition","Visualization","Production","Motion segmentation","Analytical models"
Publisher :
ieee
Conference_Title :
Affective Computing and Intelligent Interaction (ACII), 2015 International Conference on
Electronic_ISBN :
2156-8111
Type :
conf
DOI :
10.1109/ACII.2015.7344653
Filename :
7344653