Title :
Category Modeling from Just a Single Labeling: Use Depth Information to Guide the Learning of 2D Models
Author :
Quanshi Zhang; Xuan Song; Xiaowei Shao; Ryosuke Shibasaki; Huijing Zhao
Author_Institution :
Center for Spatial Information Science, University of Tokyo, Tokyo, Japan
Abstract :
An object model base that covers a large number of object categories is of great value for many computer vision tasks. Because man-made objects are usually designed with a wide variety of textures, structure is the primary feature that distinguishes their categories. Thus, how to encode this structural information, and how to begin model learning with a minimum of human labeling, become two key challenges in constructing such a model base. We design a graphical model that uses object edges to represent object structures, and this paper aims to incrementally learn this category model from a single labeled object and a number of casually captured scenes. However, incremental model learning may be biased by the limited human labeling. We therefore propose a new strategy that uses the depth information in RGBD images to guide the learning of 2D models for object detection in ordinary RGB images. In experiments, the proposed method achieves performance as good as that of supervised methods requiring the labeling of all target objects.
Keywords :
computer aided instruction; computer vision; feature extraction; object detection; 2D model learning; RGBD images; category modeling; depth information; graphical model; human labeling; object categories; object edge; object model; object structures; single labeling; structural information; supervised methods; target objects; Computational modeling; Graphical models; Image edge detection; Image segmentation; Object detection; Solid modeling; Three-dimensional displays; Graph matching; RGBD images
Conference_Titel :
Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on
Conference_Location :
Portland, OR, USA
DOI :
10.1109/CVPR.2013.32