DocumentCode :
1590189
Title :
Learning from object motion using visual saliency and speech phonemes by a humanoid robot
Author :
Jin, Guolin ; Suzuki, Kenji
Author_Institution :
Grad. Sch. of Syst. & Inf. Eng., Univ. of Tsukuba, Tsukuba, Japan
fYear :
2009
Firstpage :
1495
Lastpage :
1500
Abstract :
In this paper, we describe a novel method of word acquisition through multimodal interaction between a humanoid robot and humans. The developed robot acquires words, specifically verbs, from raw multimodal sensory stimuli by observing the motion of given objects and listening to human utterances, without symbolic representations of semantics. In addition, the robot can utter the learned words based on its own phonemes, which correspond to a categorical phonetic feature map. We consider that words bind directly to non-symbolic perceptual physical features, such as visual features of the given object and acoustic features of the given utterance, rather than to symbolic representations of semantics.
Keywords :
human-robot interaction; humanoid robots; speech processing; categorical phonetic feature map; humanoid robot; nonsymbolic perceptual physical feature; object motion; raw multimodal sensory; speech phonemes; symbolic representations; visual saliency; Biomimetics; Dictionaries; Hidden Markov models; Human robot interaction; Humanoid robots; Robot sensing systems; Speech; Statistics; Systems engineering and theory; Unsupervised learning;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Robotics and Biomimetics (ROBIO), 2009 IEEE International Conference on
Conference_Location :
Guilin
Print_ISBN :
978-1-4244-4774-9
Electronic_ISBN :
978-1-4244-4775-6
Type :
conf
DOI :
10.1109/ROBIO.2009.5420988
Filename :
5420988