DocumentCode :
292404
Title :
Use of visual and tactile data for generation of 3-D object hypotheses
Author :
Boshra, Michael ; Zhang, Hong
Author_Institution :
Dept. of Comput. Sci., Alberta Univ., Edmonton, Alta., Canada
Volume :
1
fYear :
1994
fDate :
12-16 Sep 1994
Firstpage :
73
Abstract :
Most existing 3-D object recognition/localization systems rely on a single type of sensory data, although several sensors may be available in a robot task to provide information about the objects to be recognized. In this paper, the authors present a technique to localize polyhedral objects by integrating visual and tactile data. It is assumed that visual data is provided by a monocular visual sensor, while tactile data is provided by a planar-array tactile sensor in contact with the object to be localized. The authors focus on using tactile data in the hypothesis-generation phase to reduce the visual features required for localization to a single V-junction. The main idea of the technique is to compute a set of partial pose hypotheses off-line using tactile data, and then complete these partial hypotheses on-line using visual data. The technique is tested on both simulated and real data.
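The abstract describes a two-phase pipeline: planar tactile contact fixes part of the pose off-line (which face rests on the sensor, hence two rotational degrees of freedom and the height), and a single visual V-junction resolves the remaining in-plane rotation and translation on-line. The following is a minimal Python sketch of that decomposition, not the paper's actual algorithm: the function names, the cube model, and the stubbed on-line completion (which takes an assumed in-plane angle and position instead of deriving them from image measurements) are all illustrative.

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues)."""
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, -1.0):  # antipodal vectors: rotate pi about any orthogonal axis
        axis = np.eye(3)[np.argmin(np.abs(a))]
        v = np.cross(a, axis)
        v /= np.linalg.norm(v)
        return 2.0 * np.outer(v, v) - np.eye(3)
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def offline_partial_hypotheses(faces):
    """Off-line phase: one partial pose per face that could rest on the tactile array.

    Planar contact fixes 3 of 6 DOF: two rotations (the face normal must be
    anti-parallel to the sensor's +z normal) and the translation along z.
    Left free: rotation about z and translation in the sensor plane.
    """
    sensor_normal = np.array([0.0, 0.0, 1.0])
    hyps = []
    for face_id, (normal, point) in enumerate(faces):
        R = rotation_aligning(np.asarray(normal, float), -sensor_normal)
        tz = -float((R @ np.asarray(point, float))[2])  # drop the face onto z = 0
        hyps.append({"face": face_id, "R": R, "tz": tz})
    return hyps

def complete_hypothesis(partial, inplane_angle, inplane_xy):
    """On-line phase (stub): fix the remaining 3 DOF from one image V-junction.

    In the paper these would come from matching a model vertex and its two
    edges to the observed V-junction; here assumed values are plugged in to
    show how the full pose composes from a partial hypothesis.
    """
    c, s = np.cos(inplane_angle), np.sin(inplane_angle)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    R = Rz @ partial["R"]
    t = np.array([inplane_xy[0], inplane_xy[1], partial["tz"]])
    return R, t

# Unit cube resting on the tactile array: (outward normal, a point on the face).
cube_faces = [
    (( 0,  0, -1), (0, 0, 0)), (( 0,  0, 1), (0, 0, 1)),
    (( 0, -1,  0), (0, 0, 0)), (( 0,  1, 0), (0, 1, 0)),
    ((-1,  0,  0), (0, 0, 0)), (( 1,  0, 0), (1, 0, 0)),
]
partials = offline_partial_hypotheses(cube_faces)
R, t = complete_hypothesis(partials[0], np.pi / 6, (0.10, 0.05))
print(f"{len(partials)} partial hypotheses; completed pose t = {t}")
```

Because the partial hypotheses depend only on the object model and the sensor geometry, they can be precomputed once per object, which is what makes the off-line/on-line split pay off at recognition time.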
Keywords :
image sensors; mechanoception; object recognition; robot vision; 3-D object hypotheses generation; monocular visual sensor; planar-array tactile sensor; polyhedral objects localisation; tactile data; visual data; Automatic testing; Computational modeling; Councils; Object recognition; Robot sensing systems; Robotic assembly; Robotics and automation; Sensor phenomena and characterization; Sensor systems; Tactile sensors;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Proceedings of the IEEE/RSJ/GI International Conference on Intelligent Robots and Systems '94 (IROS '94): 'Advanced Robotic Systems and the Real World'
Conference_Location :
Munich
Print_ISBN :
0-7803-1933-8
Type :
conf
DOI :
10.1109/IROS.1994.407407
Filename :
407407