DocumentCode
2020789
Title
Learning the communication of intent prior to physical collaboration
Author
Strabala, Kyle ; Lee, Min Kyung ; Dragan, Anca ; Forlizzi, Jodi ; Srinivasa, Siddhartha S.
Author_Institution
Robot. Inst., Carnegie Mellon Univ., Pittsburgh, PA, USA
fYear
2012
fDate
9-13 Sept. 2012
Firstpage
968
Lastpage
973
Abstract
When performing physical collaboration tasks, like packing a picnic basket together, humans communicate strongly and often subtly via multiple channels such as gaze, speech, gestures, movement, and posture. Understanding and participating in this communication enables us to predict a physical action rather than react to it, producing seamless collaboration. In this paper, we automatically learn key discriminative features that predict the intent to hand over an object, using machine learning techniques. We train and test our algorithm on multi-channel vision and pose data collected from an extensive user study in an instrumented kitchen. Our algorithm outputs a tree of possibilities, automatically encoding various types of pre-handover communication. A surprising outcome is that mutual gaze and inter-personal distance, often cited as key to interaction, were not key discriminative features. Finally, we discuss the immediate and future impact of this work on human-robot interaction.
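Illustrative_Example
To illustrate the kind of approach the abstract describes, the following is a minimal sketch of training a decision tree classifier on multi-channel interaction features to flag handover intent. It assumes scikit-learn; the feature names, synthetic data, and labels are hypothetical stand-ins for illustration only, not the authors' actual pipeline or study data.
```python
# Hypothetical sketch: predict handover intent from multi-channel features
# with a decision tree, in the spirit of the paper's approach.
# Feature names and data below are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Assumed per-frame features extracted from vision and pose tracking.
FEATURES = [
    "mutual_gaze",         # 1 if giver and receiver look at each other
    "interpersonal_dist",  # distance between participants (m, normalized)
    "object_in_hand",      # 1 if the giver is holding the object
    "arm_extension",       # normalized reach of the giver's arm
    "torso_orientation",   # giver's torso angle toward receiver (normalized)
]

rng = np.random.default_rng(0)
n = 500
X = rng.random((n, len(FEATURES)))     # placeholder for real recordings
y = (X[:, 2] > 0.5) & (X[:, 3] > 0.6)  # placeholder intent labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
# A learned tree keeps the discriminative features inspectable,
# analogous to the paper's "tree of possibilities".
print(export_text(clf, feature_names=FEATURES))
```
A tree model suits this setting because inspecting its splits reveals which channels actually discriminate handover intent, which is how a finding like "mutual gaze was not a key feature" can be read off the trained model.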
Keywords
human-robot interaction; learning (artificial intelligence); pose estimation; robot vision; discriminative features; gaze; gestures; instrumented kitchen; intent communication; inter-personal distance; machine learning techniques; movement; multi-channel vision; physical collaboration tasks; picnic basket; pose data; posture; pre-handover communication; speech; Decision trees; Feature extraction; Humans; Receivers; Robots
fLanguage
English
Publisher
ieee
Conference_Title
2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication
Conference_Location
Paris, France
ISSN
1944-9445
Print_ISBN
978-1-4673-4604-7
Electronic_ISSN
1944-9445
Type
conf
DOI
10.1109/ROMAN.2012.6343875
Filename
6343875
Link To Document