DocumentCode :
1434013
Title :
Using Object Affordances to Improve Object Recognition
Author :
Castellini, C. ; Tommasi, T. ; Noceti, N. ; Odone, F. ; Caputo, B.
Author_Institution :
LIRA-Lab., Univ. degli Studi di Genova, Genova, Italy
Volume :
3
Issue :
3
fYear :
2011
Firstpage :
207
Lastpage :
215
Abstract :
The problem of object recognition has not yet been solved in its general form. The most successful approach to it so far relies on object models obtained by training a statistical method on visual features extracted from camera images. The images must necessarily come from huge visual datasets in order to cope with problems such as changing illumination, point of view, etc. We hereby propose to also include in the object model a simple model of how a human being would grasp that object (its affordance). This knowledge is represented as a function mapping the visual features of an object to the kinematic features of a hand while grasping it. In practice, the function is learned via regression on a human grasping database. After describing the database (which is publicly available) and the proposed method, we experimentally evaluate it, showing that a standard object classifier working on both sets of features (visual and motor) achieves a significantly better recognition rate than a visual-only classifier.
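A minimal sketch of the pipeline the abstract describes, not the authors' actual implementation: the feature dimensions, the use of support vector regression for the affordance map, and a linear SVM for the final classifier are all assumptions for illustration, using scikit-learn and synthetic placeholder data.

```python
# Sketch of a visuo-motor object recognition pipeline (illustrative only).
# Assumed components: SVR for the visual-to-motor regression, linear SVC
# for classification on concatenated visual + motor features.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR, SVC

rng = np.random.default_rng(0)

# Placeholder data: visual features X_vis, hand kinematic (motor) features
# X_mot recorded while grasping, and object labels y.
n_train, n_test = 200, 50
d_vis, d_mot, n_classes = 64, 20, 7
X_vis_train = rng.normal(size=(n_train, d_vis))
X_mot_train = rng.normal(size=(n_train, d_mot))
y_train = rng.integers(0, n_classes, size=n_train)
X_vis_test = rng.normal(size=(n_test, d_vis))

# 1) Learn the affordance map: regression from visual to motor features.
affordance = MultiOutputRegressor(SVR(kernel="rbf"))
affordance.fit(X_vis_train, X_mot_train)

# 2) Train the object classifier on concatenated visual + motor features.
clf = SVC(kernel="linear")
clf.fit(np.hstack([X_vis_train, X_mot_train]), y_train)

# 3) At test time no real grasp is available: predict the motor features
#    from the visual ones and classify on the combined representation.
X_mot_pred = affordance.predict(X_vis_test)
y_pred = clf.predict(np.hstack([X_vis_test, X_mot_pred]))
print(y_pred[:10])
```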
Keywords :
cameras; feature extraction; image sensors; knowledge representation; object recognition; robot vision; statistical analysis; camera images; function mapping visual feature; knowledge representation; object affordances; object models; object recognition; statistical method; Cameras; Grasping; Humans; Object recognition; Support vector machines; Training; Visualization; Biologically inspired feature extraction; learning systems; robot tactile systems; robot vision systems;
fLanguage :
English
Journal_Title :
IEEE Transactions on Autonomous Mental Development
Publisher :
IEEE
ISSN :
1943-0604
Type :
jour
DOI :
10.1109/TAMD.2011.2106782
Filename :
5699912