DocumentCode :
2527904
Title :
Learning flexible, multi-modal human-robot interaction by observing human-human-interaction
Author :
Schmidt-Rohr, Sven R. ; Lösch, Martin ; Dillmann, Rüdiger
Author_Institution :
Inst. for Anthropomatics (IFA), Karlsruhe Inst. of Technol., Karlsruhe, Germany
fYear :
2010
fDate :
13-15 Sept. 2010
Firstpage :
582
Lastpage :
587
Abstract :
This paper presents a technique for learning flexible action selection in autonomous, multi-modal human-robot interaction (HRI) from observations of multi-modal human-human interaction (HHI). The proposed technique generates a model with symbolic states and actions that represents the scope of the observed mission. Variations in human behavior are learned as stochastic action effects, while execution-time perception noise is taken into account using likelihood models. During execution, the model drives dynamic action selection in HRI situations. Both the model and the evaluation system integrate the interaction modalities of spoken dialog, human body configuration, and exchanged objects. The technique is evaluated on a multi-modal service robot that can both observe a demonstration by two humans and execute the generated mission autonomously.
Keywords :
human-robot interaction; intelligent robots; mobile robots; service robots; teaching; HHI; HRI; autonomous multimodal human-robot interaction; dynamic action selection; evaluation system; execution time perception noise; interactive robot teaching; likelihood models; multimodal human-human interaction; multimodal service robot; Computational modeling; Humans; Joints; Markov processes; Noise; Robots;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
RO-MAN, 2010 IEEE
Conference_Location :
Viareggio
ISSN :
1944-9445
Print_ISBN :
978-1-4244-7991-7
Type :
conf
DOI :
10.1109/ROMAN.2010.5598670
Filename :
5598670