DocumentCode :
3672481
Title :
DeepContour: A deep convolutional feature learned by positive-sharing loss for contour detection
Author :
Wei Shen; Xinggang Wang; Yan Wang; Xiang Bai; Zhijiang Zhang
Author_Institution :
Key Lab of Specialty Fiber Optics and Optical Access Networks, Shanghai University, China
fYear :
2015
fDate :
6/1/2015
Firstpage :
3982
Lastpage :
3991
Abstract :
Contour detection serves as the basis of a variety of computer vision tasks such as image segmentation and object recognition. Mainstream work on this problem focuses on designing engineered gradient features. In this work, we show that contour detection accuracy can be improved by instead using deep features learned from convolutional neural networks (CNNs). Rather than using the networks as a black-box feature extractor, we customize the training strategy by partitioning contour (positive) data into subclasses and fitting each subclass with different model parameters. A new loss function, named positive-sharing loss, in which each subclass shares the loss for the whole positive class, is proposed to learn the parameters. Compared to the softmax loss function, the proposed one introduces an extra regularizer that emphasizes the losses for the positive and negative classes, which facilitates exploring more discriminative features. Our experimental results demonstrate that the learned deep features achieve top performance on the Berkeley Segmentation Dataset and Benchmark (BSDS500) and obtain competitive cross-dataset generalization results on the NYUD dataset.
Keywords :
"Shape","Feature extraction","Training","Standards","Machine learning","Neural networks","Data models"
Publisher :
ieee
Conference_Titel :
Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on
Electronic_ISBN :
1063-6919
Type :
conf
DOI :
10.1109/CVPR.2015.7299024
Filename :
7299024