DocumentCode
1923391
Title
Multimodal Emotion Recognition Using a Spontaneous Filipino Emotion Database
Author
Dy, Marc Lanze Ivan C.; Espinosa, Ivan Vener L.; Go, Paul Patrick V.; Mendez, Charles Martin M.; Cu, Jocelynn W.
Author_Institution
Center for Empathic Human-Comput. Interactions, De La Salle Univ., Manila, Philippines
fYear
2010
fDate
11-13 Aug. 2010
Firstpage
1
Lastpage
5
Abstract
Human-computer interaction is moving towards giving computers the ability to adapt and give feedback according to a user's emotion. Studies on emotion recognition show that combining face and voice signals produces higher recognition rates than using either modality alone. However, the majority of emotion corpora used in these systems are modeled on acted data, in which actors tend to exaggerate emotions. This study focuses on the development of a multimodal emotion recognition system trained on a spontaneous Filipino emotion database. The system extracts voice features and facial features, which are then classified into the correct emotion label using support vector machines. Based on test results, recognizing emotions using voice alone yielded 40% accuracy; using the face alone, 86%; and using a combination of voice and face, 80%.
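The abstract describes a pipeline of voice and facial feature extraction followed by SVM classification of fused modalities. The sketch below illustrates one possible reading of that pipeline, assuming feature-level fusion by concatenating pre-extracted feature vectors and an RBF-kernel SVM via scikit-learn; the feature dimensions, number of emotion classes, and data are synthetic placeholders, not the authors' setup.

    # Minimal sketch (not the authors' code) of multimodal emotion classification
    # with a support vector machine, assuming feature-level fusion of
    # pre-extracted voice and facial feature vectors. All data are placeholders.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n_samples = 200
    voice_feats = rng.normal(size=(n_samples, 12))  # placeholder prosodic/spectral features
    face_feats = rng.normal(size=(n_samples, 20))   # placeholder facial features
    labels = rng.integers(0, 4, size=n_samples)     # hypothetical emotion class labels

    # Feature-level fusion: concatenate the two modalities per sample.
    fused = np.hstack([voice_feats, face_feats])

    X_train, X_test, y_train, y_test = train_test_split(
        fused, labels, test_size=0.3, random_state=0
    )

    # Standardize features, then train an RBF-kernel SVM classifier.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_train, y_train)
    print("Held-out accuracy:", clf.score(X_test, y_test))

For unimodal baselines such as the voice-only and face-only results reported in the abstract, the same classifier would simply be trained on one modality's features instead of the concatenated vector.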
Keywords
emotion recognition; face recognition; feature extraction; human computer interaction; support vector machines; facial feature extraction; multimodal emotion recognition; spontaneous Filipino emotion database; support vector machine; voice feature extraction; Accuracy; Emotion recognition; Face; Face recognition; Feature extraction; Speech recognition; Support vector machines
fLanguage
English
Publisher
ieee
Conference_Title
Human-Centric Computing (HumanCom), 2010 3rd International Conference on
Conference_Location
Cebu
Print_ISBN
978-1-4244-7567-4
Type
conf
DOI
10.1109/HUMANCOM.2010.5563314
Filename
5563314