DocumentCode :
3427028
Title :
Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions
Author :
Elhoseiny, Mohamed ; Saleh, Babak ; Elgammal, Ahmed
Author_Institution :
Dept. of Comput. Sci., Rutgers Univ., New Brunswick, NJ, USA
fYear :
2013
fDate :
1-8 Dec. 2013
Firstpage :
2584
Lastpage :
2591
Abstract :
The main question we address in this paper is how to use purely textual descriptions of categories, with no training images, to learn visual classifiers for those categories. We propose an approach for zero-shot learning of object categories where the description of unseen categories comes in the form of typical text such as an encyclopedia entry, without the need for explicitly defined attributes. We propose and investigate two baseline formulations, based on regression and domain adaptation. Then, we propose a new constrained optimization formulation that combines a regression function and a knowledge transfer function with additional constraints to predict the classifier parameters for new classes. We applied the proposed approach to two fine-grained categorization datasets, and the results indicate successful classifier prediction.
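As a rough illustration of the regression baseline mentioned in the abstract (not the paper's constrained optimization formulation), the sketch below learns a mapping from textual class-description features of seen classes to their per-class linear classifier weights, then predicts a classifier hyperplane for an unseen class from its description alone. Ridge regression, the feature dimensions, and all data are hypothetical placeholders chosen for the sketch.

import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)
n_seen, d_text, d_img = 20, 300, 512   # assumed feature dimensions, illustrative only

# 1) Train one linear classifier per seen class on its images (synthetic data here).
W_seen = []
for c in range(n_seen):
    X = rng.normal(size=(100, d_img))          # placeholder image features
    y = (rng.random(100) > 0.5).astype(int)    # placeholder one-vs-rest labels
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    W_seen.append(clf.coef_.ravel())
W_seen = np.stack(W_seen)                      # (n_seen, d_img) classifier weights

# 2) Text features for the seen classes (e.g., tf-idf of encyclopedia entries);
#    random vectors stand in for them here.
T_seen = rng.normal(size=(n_seen, d_text))

# 3) Regress classifier weights from text features (baseline idea, not the paper's method).
reg = Ridge(alpha=1.0).fit(T_seen, W_seen)

# 4) Predict a classifier for an unseen class from its textual description alone,
#    then score a new image with the predicted hyperplane.
t_unseen = rng.normal(size=(1, d_text))
w_unseen = reg.predict(t_unseen).ravel()
x_new = rng.normal(size=d_img)
print("zero-shot decision value:", float(x_new @ w_unseen))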
Keywords :
image classification; learning (artificial intelligence); object recognition; optimisation; regression analysis; constrained optimization formulation; domain adaptation; fine-grained categorization datasets; knowledge transfer function; object categories; purely textual descriptions; regression adaptation; regression function; zero-shot learning; Birds; Correlation; Optimization; Semantics; Training; Transfer functions; Visualization; zero-shot learning; computer vision; domain adaptation; fine-grained object recognition; object recognition
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Computer Vision (ICCV), 2013 IEEE International Conference on
Conference_Location :
Sydney, NSW
ISSN :
1550-5499
Type :
conf
DOI :
10.1109/ICCV.2013.321
Filename :
6751432