DocumentCode :
3707875
Title :
Mine the fine: Fine-grained fragment discovery
Author :
M. Hadi Kiapour;Wei Di;Vignesh Jagadeesh;Robinson Piramuthu
Author_Institution :
University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
fYear :
2015
Firstpage :
3555
Lastpage :
3559
Abstract :
While discriminative visual element mining has been introduced before, in this paper we present an approach that requires minimal annotation at both training and test time. Given only a bounding-box localization of the foreground objects, our approach automatically transforms the input images into a roughly aligned pose space and discovers the most discriminative visual fragments for each category. These fragments are then used to learn robust classifiers that discriminate between very similar categories under challenging conditions such as large variations in pose or habitat. The minimal required input is a critical characteristic that enables our approach to generalize to visual domains where expert knowledge is not readily available. Moreover, our approach takes advantage of deep networks targeted towards fine-grained classification: it learns mid-level representations that are specific to a category while generalizing well across that category's instances. Our evaluations demonstrate that the automatically learned representation based on discriminative fragments significantly outperforms globally extracted deep features in classification accuracy.
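As a rough illustration of the pipeline described in the abstract (fragment proposals inside the annotated bounding box, deep features per fragment, and a discriminativeness score used to select category-specific fragments), here is a minimal Python sketch. The helper names (deep_features, propose_fragments, discriminativeness), the grid-based proposals, and the cross-validated logistic-regression score are all hypothetical placeholders for illustration, not the procedure from the paper.

```python
# Illustrative sketch only: a generic discriminative-fragment mining loop, not
# the exact method of the paper. Feature extractor, proposals, and scoring
# rule are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def deep_features(patches):
    # Placeholder for a CNN feature extractor applied to cropped image patches;
    # returns random vectors so the sketch runs without any image data.
    return rng.normal(size=(len(patches), 128))

def propose_fragments(bbox, grid=4):
    # Tile the annotated bounding box into a coarse grid of candidate fragments
    # (a stand-in for sampling fragments in a roughly aligned pose space).
    x0, y0, x1, y1 = bbox
    xs, ys = np.linspace(x0, x1, grid + 1), np.linspace(y0, y1, grid + 1)
    return [(xs[i], ys[j], xs[i + 1], ys[j + 1])
            for i in range(grid) for j in range(grid)]

def discriminativeness(pos_feats, neg_feats):
    # Score a fragment location by how well a linear classifier separates the
    # target category from the others using features taken at that location.
    X = np.vstack([pos_feats, neg_feats])
    y = np.r_[np.ones(len(pos_feats)), np.zeros(len(neg_feats))]
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=3).mean()

# Toy run: rank candidate fragment locations for one category.
fragments = propose_fragments(bbox=(10, 10, 210, 210))
scores = [discriminativeness(deep_features(range(30)), deep_features(range(30)))
          for _ in fragments]
best = int(np.argmax(scores))
print(f"most discriminative fragment: {fragments[best]} (score {scores[best]:.2f})")
```

In a real setting the random features would be replaced by activations from a network targeted at fine-grained classification, and the top-scoring fragments per category would feed the final classifiers, as the abstract describes.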
Keywords :
"Feature extraction","Yttrium","Training","Birds","Visualization","Robustness","Machine learning"
Publisher :
IEEE
Conference_Title :
2015 IEEE International Conference on Image Processing (ICIP)
Type :
conf
DOI :
10.1109/ICIP.2015.7351466
Filename :
7351466