Title :
Learning from long-term and multimodal interaction between human and humanoid robot
Author :
Suzuki, Kenji ; Harada, Atsushi ; Suzuki, Tomoya
Author_Institution :
Dept. of Intell. Interaction Technol., Univ. of Tsukuba, Tsukuba
Abstract :
We have been developing a humanoid robot that interacts with people through multimodal, long-term and continuous learning. This paper introduces three approaches: i) word acquisition, ii) self-modeling, and iii) action-oriented perception. In particular, we first describe word acquisition from raw multimodal sensory stimuli, in which the robot sees presented objects and listens to spoken utterances by humans, without any symbolic representation of semantics. The robot is therefore able to utter the learnt words using its own phonemes, which correspond to a categorical phonetic feature map. In addition, action-oriented methods such as self-modeling and the understanding of object dynamics are also described, together with the theoretical background underlying the proposed methods. We then show the performance of the proposed methods through experiments with a system implemented on a humanoid robot.
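The abstract does not specify how the categorical phonetic feature map is built. As one way to picture unsupervised phoneme-category formation from raw acoustic input, the following is a minimal sketch assuming a small self-organizing map over speech feature vectors; the function names (train_som, categorise), the 12-dimensional placeholder features, and all hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): unsupervised
# categorisation of acoustic feature vectors into a "phonetic feature map"
# using a small self-organising map (SOM). Feature extraction, dimensions
# and hyperparameters below are hypothetical placeholders.
import numpy as np

def train_som(features, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Fit a 2-D SOM; each map node acts as one phoneme-like category."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = features.shape[1]
    weights = rng.normal(size=(h * w, dim))
    # Grid coordinates of every node, used by the neighbourhood function.
    coords = np.array([(r, c) for r in range(h) for c in range(w)], dtype=float)
    n_steps = epochs * len(features)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(features):
            t = step / n_steps
            lr = lr0 * (1.0 - t)              # linearly decaying learning rate
            sigma = sigma0 * (1.0 - t) + 0.5  # shrinking neighbourhood radius
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            nbh = np.exp(-d2 / (2.0 * sigma ** 2))
            weights += lr * nbh[:, None] * (x - weights)
            step += 1
    return weights

def categorise(features, weights):
    """Map each feature vector to the index of its winning SOM node."""
    return np.array([np.argmin(np.linalg.norm(weights - x, axis=1)) for x in features])

if __name__ == "__main__":
    # Stand-in for real speech features (e.g. MFCC frames): synthetic clusters.
    rng = np.random.default_rng(1)
    frames = np.vstack([rng.normal(loc=m, scale=0.3, size=(200, 12)) for m in (-2, 0, 2)])
    som = train_som(frames)
    labels = categorise(frames, som)
    print("distinct phoneme-like categories used:", len(set(labels)))
```

In such a scheme, the discrete node indices would serve as the robot's own phoneme inventory, which could then be sequenced (e.g., with the hidden Markov models mentioned in the keywords) to reproduce learnt words.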
Keywords :
human-robot interaction; humanoid robots; learning systems; object detection; robot vision; speech processing; action-oriented perception; categorical phonetic feature map; continuous learning; humanoid robot; multimodal human-robot interaction; raw multimodal sensory stimulus; self-modeling method; spoken utterance; word acquisition; Computational modeling; Hidden Markov models; Humanoid robots; Humans; Intelligent robots; Magnetic heads; Motion planning; Robot sensing systems; Speech; Unsupervised learning;
Conference_Title :
IECON 2008 - 34th Annual Conference of IEEE Industrial Electronics
Conference_Location :
Orlando, FL
Print_ISBN :
978-1-4244-1767-4
Electronic_ISSN :
1553-572X
DOI :
10.1109/IECON.2008.4758510