DocumentCode :
1484519
Title :
Continuous Prediction of Spontaneous Affect from Multiple Cues and Modalities in Valence-Arousal Space
Author :
Nicolaou, Mihalis A. ; Gunes, Hatice ; Pantic, Maja
Author_Institution :
Dept. of Comput., Imperial Coll., London, UK
Volume :
2
Issue :
2
fYear :
2011
Firstpage :
92
Lastpage :
105
Abstract :
Past research on the analysis of human affect has focused on recognizing prototypic expressions of six basic emotions from posed data acquired in laboratory settings. Recently, there has been a shift toward subtle, continuous, and context-specific interpretations of affective displays recorded in naturalistic and real-world settings, and toward multimodal analysis and recognition of human affect. In line with this shift, this paper presents, to the best of our knowledge, the first approach in the literature that: 1) fuses facial expression, shoulder gesture, and audio cues for dimensional and continuous prediction of emotions in valence-arousal space; 2) compares the performance of two state-of-the-art machine learning techniques applied to the target problem, bidirectional Long Short-Term Memory neural networks (BLSTM-NNs) and Support Vector Machines for Regression (SVR); and 3) proposes an output-associative fusion framework that incorporates correlations and covariances between the emotion dimensions. The proposed approach is evaluated on spontaneous SAL data from four subjects using subject-dependent leave-one-sequence-out cross-validation. The experimental results show that: 1) on average, BLSTM-NNs outperform SVR due to their ability to learn past and future context; 2) the proposed output-associative fusion framework outperforms feature-level and model-level fusion by modeling and learning correlations and patterns between the valence and arousal dimensions; and 3) the proposed system is able to closely reproduce the valence and arousal ground truth obtained from human coders.
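Illustration :
The output-associative fusion summarized in the abstract can be sketched as a two-stage regression: each dimension (valence, arousal) is first predicted independently, and a second regressor then re-predicts each dimension from both first-stage outputs, exploiting their correlation. The Python sketch below is only an illustration of that idea and assumes scikit-learn's SVR as the base regressor; the function name, the absence of temporal windowing over the outputs, and the synthetic data are assumptions, not taken from the paper.

import numpy as np
from sklearn.svm import SVR

def output_associative_fusion(X_train, y_train, X_test):
    """y_train has two columns: continuous valence and arousal labels."""
    # Stage 1: one independent regressor per emotion dimension.
    stage1 = [SVR(kernel="rbf").fit(X_train, y_train[:, d]) for d in range(2)]
    z_train = np.column_stack([m.predict(X_train) for m in stage1])
    z_test = np.column_stack([m.predict(X_test) for m in stage1])
    # Stage 2: each dimension is re-predicted from BOTH stage-1 outputs,
    # letting the model learn valence-arousal correlations.
    stage2 = [SVR(kernel="rbf").fit(z_train, y_train[:, d]) for d in range(2)]
    return np.column_stack([m.predict(z_test) for m in stage2])

# Illustrative usage on random data.
rng = np.random.default_rng(0)
X_tr, X_te = rng.normal(size=(200, 10)), rng.normal(size=(50, 10))
y_tr = rng.normal(size=(200, 2))  # columns: valence, arousal
print(output_associative_fusion(X_tr, y_tr, X_te).shape)  # (50, 2)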
Keywords :
emotion recognition; face recognition; learning (artificial intelligence); neural nets; regression analysis; support vector machines; BLSTM-NN; SVR; affective displays; arousal dimensions; audio cues; bidirectional long short-term memory neural networks; context-specific interpretations; continuous emotion prediction; continuous prediction; dimensional emotion prediction; emotion dimensions; facial expression; feature-level fusion; human affect recognition; laboratory settings; model-level fusion; multimodal analysis; naturalistic settings; output-associative fusion framework; posed data; prototypic expressions recognition; real-world settings; regression; shoulder gesture; state-of-the-art machine learning techniques; subject-dependent leave-one-sequence-out cross validation; support vector machines; valence-arousal space; Acoustics; Emotion recognition; Feature extraction; Hidden Markov models; Humans; Sensors; Visualization; Dimensional affect recognition; continuous affect prediction; emotional acoustic signals; facial expressions; multicue and multimodal fusion; output-associative fusion; shoulder gestures; valence and arousal dimensions;
fLanguage :
English
Journal_Title :
IEEE Transactions on Affective Computing
Publisher :
IEEE
ISSN :
1949-3045
Type :
jour
DOI :
10.1109/T-AFFC.2011.9
Filename :
5740839