DocumentCode
80467
Title
The MEI Robot: Towards Using Motherese to Develop Multimodal Emotional Intelligence
Author
Lim, Angelica; Okuno, Hiroshi G.
Author_Institution
Grad. Sch. of Inf., Kyoto Univ., Kyoto, Japan
Volume
6
Issue
2
fYear
2014
fDate
June 2014
Firstpage
126
Lastpage
138
Abstract
We introduce the first steps toward a developmental robot called MEI (multimodal emotional intelligence), a robot that can understand and express emotions in voice, gesture, and gait using a controller trained only on voice. Although it is known that humans can perceive affect in voice, movement, music, and even stimuli as minimal as point-light displays, it is not clear how humans develop this skill. Is it innate? If not, how does this emotional intelligence develop in infants? The MEI robot develops these skills through vocal input and perceptual mapping of vocal features to other modalities. We base MEI's development on the idea that motherese is used to associate dynamic vocal contours with facial emotion from an early age. MEI uses these dynamic contours to both understand and express multimodal emotions through a unified model called SIRE (Speed, Intensity, irRegularity, and Extent). Offline experiments with MEI support its cross-modal generalization ability: a model trained on voice data can recognize happiness, sadness, and fear in a completely different modality, human gait. User evaluations of the MEI robot speaking, gesturing, and walking show that it can reliably express multimodal happiness and sadness using only the voice-trained model as a basis.
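As a rough illustration of the cross-modal scheme described in the abstract (not the authors' code), the sketch below trains one Gaussian mixture model per emotion on voice-derived SIRE features and then classifies a gait sample mapped into the same four-dimensional SIRE space by maximum log-likelihood. The per-emotion GMM setup is suggested by the "Gaussian mixture" keyword; the function names, placeholder data, and feature extraction step are assumptions for illustration only.

```python
# Minimal sketch, assuming SIRE vectors (Speed, Intensity, irRegularity, Extent)
# have already been extracted upstream for each voice or gait sample.
import numpy as np
from sklearn.mixture import GaussianMixture

EMOTIONS = ["happiness", "sadness", "fear"]

def train_voice_models(sire_by_emotion, n_components=2, seed=0):
    """Fit one GMM per emotion on voice SIRE vectors of shape [n_samples, 4]."""
    models = {}
    for emotion in EMOTIONS:
        gmm = GaussianMixture(n_components=n_components, random_state=seed)
        gmm.fit(sire_by_emotion[emotion])
        models[emotion] = gmm
    return models

def classify(models, sire_vector):
    """Label one SIRE vector (from any modality) by maximum log-likelihood."""
    x = np.asarray(sire_vector, dtype=float).reshape(1, -1)
    scores = {e: m.score_samples(x)[0] for e, m in models.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical voice training data: 50 SIRE vectors per emotion.
    voice_data = {e: rng.normal(loc=i, scale=0.3, size=(50, 4))
                  for i, e in enumerate(EMOTIONS)}
    models = train_voice_models(voice_data)
    # A gait sample mapped into SIRE space is classified with the voice-trained models.
    print(classify(models, [1.1, 0.9, 1.0, 1.2]))
```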
Keywords
emotion recognition; face recognition; gait analysis; gesture recognition; human-robot interaction; intelligent robots; path planning; social aspects of automation; MEI robot; MEI robot gesturing; MEI robot speaking; MEI robot walking; SIRE; cross-modal generalization ability; dynamic vocal contours; emotion expression; facial emotion; motherese; multimodal emotional intelligence; music; perceptual mapping; robot movement; robot voice; speed-intensity-irregularity-extent; vocal features; vocal input; Emotion recognition; Face; Feature extraction; Psychology; Robots; Speech; Speech recognition; Cross-modal recognition; SIRE; emotion recognition; gait; Gaussian mixture; gesture; motherese; voice
fLanguage
English
Journal_Title
Autonomous Mental Development, IEEE Transactions on
Publisher
IEEE
ISSN
1943-0604
Type
jour
DOI
10.1109/TAMD.2014.2317513
Filename
6798757
Link To Document