DocumentCode :
567309
Title :
Selecting and commanding groups in a multi-robot vision based system
Author :
Milligan, Brian ; Mori, Greg ; Vaughan, Richard
Author_Institution :
Sch. of Comput. Sci., Simon Fraser Univ., Burnaby, BC, Canada
fYear :
2011
fDate :
8-11 March 2011
Firstpage :
415
Lastpage :
415
Abstract :
We present a novel method for a human user to select groups of robots without using any external instruments. We use computer vision techniques to read a user's hand gestures and use that gesture information to select single or multiple robots from a population and assign them to a task. To select robots, the user simply draws a circle in the air around the robots he or she wants to command. Once the user selects the group of robots, he or she can send them to a location by pointing at a target. To achieve this, we use cameras mounted on mobile robots to find the user's face and then track his or her hand. Our method exploits an observation from human-robot interaction research on pointing, which found that a human's target when pointing is best inferred using the line from the human's eyes to his or her extended hand [1]. When the user circles a group of robots, the projected eye-to-hand lines form a cone-like shape that envelops the selected robots. From a 2D camera mounted on a robot, this cone is seen with the user's face at the vertex and the hand movements as a circular slice of the cone. We show in the video how the robots can tell whether they have been selected by testing whether the face is within the circle made by the hand: if the face is within the circle, the robot was selected; if the face is outside the circle, it was not. Following selection, the robots read a command by looking for a pointing gesture, which is detected by an outstretched hand. From the pointing gesture the robots collectively infer which target the user is pointing at by calculating the distance and direction that the hand moved relative to the face. The selected robots then travel to the target, and unselected robots can be selected and commanded as desired. The robots communicate their state to the user through LED lights on their chassis. When a robot is searching for the user's face, the LEDs flash to get the user's attention (since frontal faces are easiest to detect). When the robots find the user's face, the lights become solid yellow to indicate that they are ready to be selected. When selected, the robots' LEDs turn blue to indicate that they can now be commanded. Once robots are sent off to a location, the remaining robots can be selected and assigned another task. We demonstrate this method working on low-powered Atom netbooks and off-the-shelf USB web cameras. This is the first working implementation of a system that allows a human to select and command groups of robots without using any external instruments.
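To make the selection geometry concrete, the sketch below shows one way the face-inside-hand-circle test and the eye-to-hand pointing direction could be computed in the image plane. The abstract does not give the authors' implementation; the ray-casting point-in-polygon routine, all function names, and the pixel coordinates are illustrative assumptions, not the paper's code.

    import math
    from typing import List, Tuple

    Point = Tuple[float, float]

    def point_in_polygon(p: Point, poly: List[Point]) -> bool:
        """Ray-casting test: count polygon edges crossed by a rightward
        horizontal ray from p; an odd count means p is inside."""
        x, y = p
        inside = False
        for i in range(len(poly)):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % len(poly)]
            if (y1 > y) != (y2 > y):  # edge straddles the ray's height
                xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < xi:
                    inside = not inside
        return inside

    def robot_is_selected(face_center: Point, hand_trace: List[Point]) -> bool:
        """A robot deems itself selected iff the tracked face center lies
        inside the closed curve the user's hand traced in its camera image."""
        return len(hand_trace) >= 3 and point_in_polygon(face_center, hand_trace)

    def pointing_ray(face_center: Point, hand: Point) -> Point:
        """Eye-to-hand pointing: the unit direction from the face toward
        the extended hand approximates the image-plane projection of the
        user's pointing ray."""
        dx, dy = hand[0] - face_center[0], hand[1] - face_center[1]
        norm = math.hypot(dx, dy) or 1.0
        return (dx / norm, dy / norm)

    if __name__ == "__main__":
        face = (320.0, 240.0)  # face center in a 640x480 image (assumed)
        trace = [(320 + 100 * math.cos(2 * math.pi * t / 12),
                  240 + 100 * math.sin(2 * math.pi * t / 12))
                 for t in range(12)]  # hand circling the face
        print(robot_is_selected(face, trace))        # -> True
        print(pointing_ray(face, (420.0, 240.0)))    # -> (1.0, 0.0)

In practice a tracked hand trace is noisy and rarely perfectly closed; closing the polygon between the last and first tracked points, as done here, is one simple way to handle that.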
Keywords :
gesture recognition; human-robot interaction; image sensors; mobile robots; multi-robot systems; robot vision; video signal processing; 2D camera; LED lights; circling robots; computer vision techniques; cone-like shape; external instruments; group commanding; group selection; hand gestures; human-robot interaction; low-powered Atom netbooks; mobile robots; multi-robot vision-based system; pointing gesture; projected eye-to-hand lines; robot LED; robot chassis; off-the-shelf USB web cameras; video; Cameras; Educational institutions; Face; Humans; Robot kinematics; Robot vision systems; Computer Vision; Multiple Robots; Pointing; Selection; Task allocation and coordination; User Feedback;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Human-Robot Interaction (HRI), 2011 6th ACM/IEEE International Conference on
Conference_Location :
Lausanne
ISSN :
2167-2121
Print_ISBN :
978-1-4673-4393-0
Electronic_ISBN :
2167-2121
Type :
conf
Filename :
6281374