Title :
Multi-modal human robot interaction for map generation
Author :
Ghidary, Saeed Shiry ; Nakata, Yasushi ; Saito, Hiroshi ; Hattori, Motofumi ; Takamori, Toshi
Author_Institution :
Dept. of Comput. Syst., Kobe Univ., Japan
Abstract :
Describes an interface for multi-modal human-robot interaction that enables people to introduce a newcomer robot to the attributes of objects and places in a room through speech commands and hand gestures. The robot builds an environment map of the room from knowledge learned through communication with humans and uses this map for navigation. The developed system consists of several components: natural language processing, posture recognition, object localization, and map generation. The system combines multiple sources of information with model matching to detect and track a human hand, so that the user can point toward an object of interest and either guide the robot to approach it or register that object's position in the room. Object positions in the room are located by monocular camera vision and a depth-from-focus method.
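Note: the paper does not publish source code. As a rough illustration of the depth-from-focus idea mentioned in the abstract, the sketch below scores a focal stack with a sharpness measure and inverts the thin-lens equation 1/f = 1/u + 1/v at the sharpest setting to recover object depth u. The variance-of-Laplacian measure, the function names, and the calibration inputs are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def focus_measure(img: np.ndarray) -> float:
    """Variance of a discrete Laplacian: higher means sharper.

    `img` is a 2-D grayscale array. This is one common focus
    measure; the paper does not specify which one it uses.
    """
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def depth_from_focus(focal_stack, image_distances, focal_length):
    """Estimate object depth u from a focal stack.

    focal_stack     : list of 2-D grayscale arrays, one per lens setting
    image_distances : lens-to-sensor distance v for each frame (same units as f)
    focal_length    : lens focal length f

    Picks the frame with the highest focus score, then inverts the
    thin-lens equation 1/f = 1/u + 1/v, giving u = f*v / (v - f).
    """
    scores = [focus_measure(img) for img in focal_stack]
    v = image_distances[int(np.argmax(scores))]   # sharpest lens setting
    return focal_length * v / (v - focal_length)  # object distance u
```

Any monotone sharpness measure (e.g. Tenengrad or sum-modified-Laplacian) would serve equally well here; the thin-lens inversion only requires the calibrated image distance at the best-focus setting.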
Keywords :
gesture recognition; intelligent control; mobile robots; natural language interfaces; path planning; position control; position measurement; robot vision; speech-based user interfaces; depth from focus; hand gestures; map generation; model matching; monocular camera vision; multi-modal human robot interaction; natural language processing; navigation; object localization; posture recognition; speech commands; Educational robots; Human robot interaction; Intelligent robots; Mobile robots; Navigation; Orbital robotics; Rehabilitation robotics; Robot sensing systems; Robot vision systems; Speech;
Conference_Title :
Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2001)
Conference_Location :
Maui, HI, USA
Print_ISBN :
0-7803-6612-3
DOI :
10.1109/IROS.2001.976404