DocumentCode
253995
Title
Enriching Visual Knowledge Bases via Object Discovery and Segmentation
Author
Chen, Xinlei; Shrivastava, Abhinav; Gupta, Abhinav
fYear
2014
fDate
23-28 June 2014
Firstpage
2035
Lastpage
2042
Abstract
There have been several recent efforts to build visual knowledge bases from Internet images, but most of these approaches represent objects only with bounding boxes. In this paper, we propose to enrich these knowledge bases by automatically discovering objects and their segmentations from noisy Internet images. Specifically, our approach combines the power of generative modeling for segmentation with the effectiveness of discriminative models for detection. The key idea is to learn and exploit top-down segmentation priors based on visual subcategories. The strong priors learned from these visual subcategories are then combined with discriminatively trained detectors and bottom-up cues to produce clean object segmentations. Our experimental results indicate state-of-the-art performance on the challenging dataset introduced by Rubinstein et al. [29]. We have integrated our algorithm into NEIL [5] to enrich its knowledge base. As of April 14, 2014, NEIL had automatically generated approximately 500K segmentations using web data.
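As a rough illustration of the fusion step sketched in the abstract, the snippet below shows one minimal way to combine a top-down segmentation prior, a detector response map, and a bottom-up cue into a foreground mask. This is not the authors' implementation; the function name, weights, per-pixel score maps, and thresholding are assumptions made only for illustration.

    # Hypothetical sketch: fuse a top-down prior, a detector heatmap, and a
    # bottom-up cue (all HxW maps in [0, 1]) into one per-pixel foreground
    # score, then threshold to get a binary segmentation mask.
    import numpy as np

    def fuse_segmentation_scores(prior, detector, bottom_up,
                                 weights=(0.5, 0.3, 0.2), threshold=0.5):
        """Weighted per-pixel fusion; weights and threshold are illustrative."""
        w_p, w_d, w_b = weights
        score = w_p * prior + w_d * detector + w_b * bottom_up
        return score >= threshold  # boolean foreground mask

    # Usage with random maps standing in for real cues.
    rng = np.random.default_rng(0)
    h, w = 240, 320
    mask = fuse_segmentation_scores(rng.random((h, w)),
                                    rng.random((h, w)),
                                    rng.random((h, w)))
    print("foreground fraction:", mask.mean())

The actual method described in the paper learns its priors from visual subcategories and produces segmentations rather than a fixed linear combination; the sketch only conveys the idea of merging top-down and bottom-up evidence.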
Keywords
Internet; image representation; image segmentation; knowledge based systems; Internet images; NEIL; bounding box representation; discriminatively trained detectors; generative modeling; object discovery; object segmentation; top-down segmentation; visual knowledge bases; visual subcategories; web data; Detectors; Joints; Noise measurement; Semantics; Visualization
fLanguage
English
Publisher
IEEE
Conference_Title
2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference_Location
Columbus, OH
Type
conf
DOI
10.1109/CVPR.2014.261
Filename
6909658