Title of article :
Learning RGB-D descriptors of garment parts for informed robot grasping
Author/Authors :
Ramisa, Arnau; Alenyà, Guillem; Moreno-Noguer, Francesc; Torras, Carme
Abstract :
Robotic handling of textile objects in household environments is an emerging application that has recently received considerable attention thanks to the development of domestic robots. Most current approaches follow a multiple re-grasp strategy for this purpose, in which clothes are sequentially grasped from different points until one of them yields a desired configuration.
In this work we propose a vision-based method, built on the Bag of Visual Words approach, that combines appearance and 3D information to detect parts suitable for grasping in clothes, even when they are highly wrinkled.
We also contribute a new, annotated garment-part dataset that can be used for benchmarking classification, part detection, and segmentation algorithms. The dataset is used to evaluate our approach and several state-of-the-art 3D descriptors on the task of garment part detection. Results indicate that appearance is a reliable source of information, but that augmenting it with 3D information helps the method generalize better to new clothing items.
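The abstract describes a Bag of Visual Words pipeline that fuses appearance and 3D cues to score garment regions for grasping. Below is a minimal sketch of such a pipeline, assuming generic ingredients (SIFT appearance features, a toy depth-gradient descriptor, MiniBatchKMeans vocabularies, and a linear SVM); these choices are illustrative assumptions, not the authors' actual descriptors, vocabulary sizes, or detection stage.

# Minimal Bag-of-Visual-Words sketch combining appearance and depth cues.
# NOT the paper's exact method: the descriptors, vocabulary size and classifier
# below are assumptions chosen only to illustrate the general approach.
import numpy as np
import cv2
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

N_WORDS = 128  # visual vocabulary size (assumed)
sift = cv2.SIFT_create()

def appearance_descriptors(gray_img):
    """Local SIFT descriptors extracted from a grayscale image."""
    _, desc = sift.detectAndCompute(gray_img, None)
    return desc if desc is not None else np.zeros((0, 128), np.float32)

def depth_descriptors(depth_img, patch=16):
    """Toy 3D cue: per-patch histograms of depth gradients (a stand-in for a
    real 3D descriptor such as those evaluated in the paper)."""
    gx = cv2.Sobel(depth_img.astype(np.float32), cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(depth_img.astype(np.float32), cv2.CV_32F, 0, 1)
    feats = []
    for y in range(0, depth_img.shape[0] - patch, patch):
        for x in range(0, depth_img.shape[1] - patch, patch):
            block = np.hstack([gx[y:y+patch, x:x+patch].ravel(),
                               gy[y:y+patch, x:x+patch].ravel()])
            hist, _ = np.histogram(block, bins=32, range=(-50, 50))
            feats.append(hist.astype(np.float32))
    return np.array(feats, np.float32)

def bovw_histogram(descriptors, vocabulary):
    """Quantize local descriptors against a learned vocabulary and return a
    normalized word-count histogram."""
    if len(descriptors) == 0:
        return np.zeros(vocabulary.n_clusters, np.float32)
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(np.float32)
    return hist / (hist.sum() + 1e-8)

def train(samples, labels):
    """samples: list of (gray, depth) image pairs over annotated garment regions;
    labels: part class per sample (e.g. collar vs. background). Returns the two
    vocabularies and a linear classifier over the concatenated histograms."""
    app_vocab = MiniBatchKMeans(n_clusters=N_WORDS, n_init=3)
    dep_vocab = MiniBatchKMeans(n_clusters=N_WORDS, n_init=3)
    app_vocab.fit(np.vstack([appearance_descriptors(g) for g, _ in samples]))
    dep_vocab.fit(np.vstack([depth_descriptors(d) for _, d in samples]))
    X = [np.hstack([bovw_histogram(appearance_descriptors(g), app_vocab),
                    bovw_histogram(depth_descriptors(d), dep_vocab)])
         for g, d in samples]
    clf = LinearSVC().fit(X, labels)
    return app_vocab, dep_vocab, clf

At detection time the same histograms would be computed over candidate regions (e.g. a sliding window over the wrinkled garment) and scored by the classifier; that stage is omitted here for brevity.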
Keywords :
Computer vision, Pattern recognition, Machine learning, Garment part detection, Classification, Bag of visual words
Journal title :
Engineering Applications of Artificial Intelligence