DocumentCode :
2738999
Title :
Effect of sensor fusion for recognition of emotional states using voice, face image and thermal image of face
Author :
Yoshitomi, Yasunari ; Kim, Sung Ill ; Kawano, Takako ; Kitazoe, T.
Author_Institution :
Dept. of Comput. Sci. & Syst. Eng., Miyazaki Univ., Japan
fYear :
2000
fDate :
2000
Firstpage :
178
Lastpage :
183
Abstract :
A new integrated method is presented for recognizing human emotional expressions from both voice and facial expressions. For voice, prosodic parameters such as pitch signals, energy, and their derivatives are used and trained with hidden Markov models for recognition. For facial expressions, feature parameters extracted from thermal images, in addition to visible images, are used and trained with neural networks for recognition. The thermal images are captured in the infrared band, which is not influenced by lighting conditions. The total recognition rate obtained by fusing both modalities is higher than that obtained from either single-modality experiment. The results are compared with recognition rates obtained from a human questionnaire.
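As a rough illustration of the sensor-fusion idea described in the abstract, the following Python sketch combines per-emotion scores from a voice recognizer (e.g. HMM likelihoods) and a face recognizer (e.g. neural-network outputs) by a weighted sum and selects the top-scoring emotion. The emotion label set, the weights, and the function names here are assumptions made for illustration only; the paper's actual fusion rule may differ.

# Hypothetical sketch of decision-level sensor fusion (not the authors' exact
# procedure): per-emotion scores from a voice recognizer and a face recognizer
# are normalized, combined by a weighted sum, and the best emotion is chosen.

EMOTIONS = ["anger", "happiness", "sadness", "surprise", "neutral"]  # assumed label set

def normalize(scores):
    """Scale a score vector so it sums to 1 (treats scores as pseudo-probabilities)."""
    total = sum(scores)
    if total <= 0:
        return [1.0 / len(scores)] * len(scores)
    return [s / total for s in scores]

def fuse(voice_scores, face_scores, w_voice=0.5, w_face=0.5):
    """Weighted sum of normalized voice and face scores for each emotion."""
    v = normalize(voice_scores)
    f = normalize(face_scores)
    return [w_voice * vi + w_face * fi for vi, fi in zip(v, f)]

def recognize(voice_scores, face_scores):
    """Return the emotion label with the highest fused score."""
    fused = fuse(voice_scores, face_scores)
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

# Example: the face channel is more confident about "happiness" than the voice channel.
print(recognize([0.2, 0.3, 0.2, 0.2, 0.1], [0.1, 0.6, 0.1, 0.1, 0.1]))  # -> happiness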
Keywords :
computer vision; face recognition; feature extraction; hidden Markov models; infrared imaging; neural nets; sensor fusion; speech recognition; computer vision; facial expression recognition; feature extraction; hidden Markov model; human emotional state; neural networks; sensor fusion; thermal images; voice expression; Discrete cosine transforms; Emotion recognition; Face recognition; Humans; Image recognition; Layout; Sensor fusion; Skin; Speech recognition; Virtual reality;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Proceedings of the 9th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2000)
Conference_Location :
Osaka
Print_ISBN :
0-7803-6273-X
Type :
conf
DOI :
10.1109/ROMAN.2000.892491
Filename :
892491