Title :
Audio-visual affect recognition through multi-stream fused HMM for HCI
Author :
Zeng, Zhihong ; Tu, Jilin ; Pianfetti, Brian ; Liu, Ming ; Zhang, Tong ; Zhang, Zhenqiu ; Huang, Thomas S. ; Levinson, Stephen
Author_Institution :
University of Illinois at Urbana-Champaign, Urbana, IL, USA
Abstract :
Advances in computer processing power and emerging algorithms are enabling new ways of envisioning human-computer interaction. This paper focuses on the development of a computing algorithm that uses audio and visual sensors to detect and track a user's affective state to aid computer decision making. Using our multi-stream fused hidden Markov model (MFHMM), we analyzed coupled audio and visual streams to detect 11 cognitive/emotive states. The MFHMM builds an optimal connection among multiple streams according to the maximum entropy principle and the maximum mutual information criterion. Person-independent experiments on 660 sequences from 20 subjects show that the MFHMM approach achieves an accuracy of 80.61%, outperforming face-only HMM, pitch-only HMM, energy-only HMM, and independent HMM fusion.
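Illustrative sketch :
A minimal sketch, not the authors' code, of the "independent HMM fusion" baseline that the abstract compares against: one HMM per affective state per stream, with per-stream log-likelihoods summed at decision time. The paper's MFHMM goes further by coupling the streams inside the model via the maximum entropy principle and the maximum mutual information criterion, which is not shown here. The third-party hmmlearn package, the stream names, and the feature dimensions are assumptions for illustration.

import numpy as np
from hmmlearn.hmm import GaussianHMM

STREAMS = ["face", "pitch", "energy"]   # hypothetical stream labels
N_HIDDEN = 3                            # hidden states per HMM (assumed)

def train_state_models(train_data):
    """train_data: {affect_label: {stream: list of (T_i, d) feature arrays}}.
    Returns {affect_label: {stream: fitted GaussianHMM}}."""
    models = {}
    for label, streams in train_data.items():
        models[label] = {}
        for stream, sequences in streams.items():
            # hmmlearn expects all sequences stacked, plus their lengths.
            X = np.vstack(sequences)
            lengths = [len(seq) for seq in sequences]
            model = GaussianHMM(n_components=N_HIDDEN,
                                covariance_type="diag", n_iter=25)
            model.fit(X, lengths)
            models[label][stream] = model
    return models

def classify(models, test_sequence):
    """test_sequence: {stream: (T, d) feature array for one utterance}.
    Picks the affect label whose per-stream HMMs give the highest summed
    log-likelihood, i.e. the streams are treated as independent."""
    best_label, best_score = None, -np.inf
    for label, stream_models in models.items():
        score = sum(stream_models[s].score(test_sequence[s]) for s in STREAMS)
        if score > best_score:
            best_label, best_score = label, score
    return best_label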
Keywords :
hidden Markov models; human computer interaction; image recognition; maximum entropy methods; speech recognition; HCI; audio sensor; audio-visual affect recognition; audio-visual stream; cognitive-emotive state; computer decision making; maximum entropy principle; multistream fused hidden Markov model; visual sensor; Application software; Coupled mode analysis; Decision making; Entropy; Mutual information; Streaming media; Testing; Training data
Conference_Title :
2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005)
Print_ISBN :
0-7695-2372-2
DOI :
10.1109/CVPR.2005.77