DocumentCode :
1724443
Title :
Robot guidance by human pointing gestures
Author :
Littmann, E. ; Drees, A. ; Ritter, H.
Author_Institution :
Ulm Univ., Germany
fYear :
1996
Firstpage :
449
Lastpage :
457
Abstract :
In this paper we report on the development of the modular neural system “SEE-EAGLE” for the visual guidance of robot pick-and-place actions. Several neural networks are integrated into a single system that visually recognizes human hand pointing gestures in stereo pairs of color video images. The output of the hand recognition stage is further processed by a set of color-sensitive neural networks to determine the Cartesian location of the target object referenced by the pointing gesture. Finally, this information is used to guide a robot to pick the target object and place it at another location, which can be specified by a second pointing gesture. The combined system localizes the referenced target object to within 1 cm in a workspace area of 50×50 cm. In our current environment, this is sufficient to pick and place arbitrarily positioned target objects within the workspace.
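The abstract describes a three-stage pipeline: gesture recognition from stereo color images, color-sensitive target localization, and robot pick-and-place control. The Python sketch below is only a minimal, hypothetical illustration of such a pipeline; every class and function name is a placeholder chosen here, not taken from the paper, and the actual SEE-EAGLE stages are neural networks rather than the stub functions shown.

# Hypothetical sketch of a SEE-EAGLE-style pipeline (placeholders, not the authors' code).
from dataclasses import dataclass
from typing import Tuple

import numpy as np


@dataclass
class StereoPair:
    """A stereo pair of color video frames (H x W x 3 each)."""
    left: np.ndarray
    right: np.ndarray


def recognize_pointing_gesture(pair: StereoPair) -> Tuple[np.ndarray, np.ndarray]:
    """Stage 1 (placeholder): locate the pointing hand/fingertip in both views.
    In the paper this is done by neural networks; here we return dummy 2D pixel
    coordinates for the left and right images."""
    return np.array([320.0, 240.0]), np.array([300.0, 240.0])


def localize_target(pair: StereoPair,
                    tip_left: np.ndarray,
                    tip_right: np.ndarray) -> np.ndarray:
    """Stage 2 (placeholder): map the gesture to the Cartesian (x, y) position of
    the referenced object; the paper uses color-sensitive neural networks."""
    return np.array([0.25, 0.10])  # meters, somewhere inside the workspace


class DummyRobot:
    """Placeholder robot interface; prints commands instead of moving hardware."""
    def move_to(self, xy: np.ndarray) -> None:
        print(f"move to x={xy[0]:.3f} m, y={xy[1]:.3f} m")

    def grasp(self) -> None:
        print("close gripper")

    def release(self) -> None:
        print("open gripper")


def pick_and_place(robot: DummyRobot, pick_xy: np.ndarray, place_xy: np.ndarray) -> None:
    """Stage 3 (placeholder): drive the robot through a pick-and-place action."""
    robot.move_to(pick_xy)
    robot.grasp()
    robot.move_to(place_xy)
    robot.release()


WORKSPACE = 0.5  # the paper reports a 50 cm x 50 cm workspace


def in_workspace(xy: np.ndarray) -> bool:
    return 0.0 <= xy[0] <= WORKSPACE and 0.0 <= xy[1] <= WORKSPACE


if __name__ == "__main__":
    frame = StereoPair(left=np.zeros((480, 640, 3), dtype=np.uint8),
                       right=np.zeros((480, 640, 3), dtype=np.uint8))
    tip_l, tip_r = recognize_pointing_gesture(frame)
    pick_xy = localize_target(frame, tip_l, tip_r)
    place_xy = np.array([0.40, 0.35])  # would come from a second pointing gesture
    if in_workspace(pick_xy) and in_workspace(place_xy):
        pick_and_place(DummyRobot(), pick_xy, place_xy)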
Keywords :
image recognition; man-machine systems; manipulators; position control; robot vision; self-organising feature maps; stereo image processing; Cartesian location; SEE-EAGLE; color video images; human pointing gestures; man-machine system; modular neural system; robot vision; robot visual guidance; self-organising maps; stereo pairs; Data gloves; Humans; Image recognition; Image segmentation; Neural networks; Robot control; Robot vision systems; Target recognition; User interfaces; Virtual reality
fLanguage :
English
Publisher :
IEEE
Conference_Title :
Proceedings of the 1996 International Workshop on Neural Networks for Identification, Control, Robotics, and Signal/Image Processing
Conference_Location :
Venice
Print_ISBN :
0-8186-7456-3
Type :
conf
DOI :
10.1109/NICRSP.1996.542789
Filename :
542789