DocumentCode :
2954757
Title :
Dyadic transfer learning for cross-domain image classification
Author :
Wang, Hua ; Nie, Feiping ; Huang, Heng ; Ding, Chris
Author_Institution :
Dept. of Comput. Sci. & Eng., Univ. of Texas at Arlington, Arlington, TX, USA
fYear :
2011
fDate :
6-13 Nov. 2011
Firstpage :
551
Lastpage :
556
Abstract :
Because manual image annotation is both expensive and labor intensive, in practice we often do not have sufficient labeled images to train an effective classifier for new image classification tasks. Although multiple labeled image data sets are publicly available for a number of computer vision tasks, a simple mixture of them cannot achieve good performance due to the heterogeneous properties and structures of different data sets. In this paper, we propose a novel nonnegative matrix tri-factorization based transfer learning framework, called the Dyadic Knowledge Transfer (DKT) approach, to transfer cross-domain image knowledge for new computer vision tasks, such as classification. An efficient iterative algorithm to solve the proposed optimization problem is introduced. We evaluate the proposed approach on two benchmark image data sets that simulate real-world cross-domain image classification tasks. Promising experimental results demonstrate the effectiveness of the proposed approach.
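The abstract builds on nonnegative matrix tri-factorization (NMTF), which decomposes a nonnegative data matrix X into three nonnegative factors, X ≈ F S Gᵀ. The sketch below is not the paper's DKT algorithm; it is a minimal generic NMTF solver using the standard multiplicative update rules for the Frobenius-norm objective, with `nmtf` and its parameters chosen here for illustration.

```python
import numpy as np

def nmtf(X, k1, k2, n_iter=200, eps=1e-9, seed=0):
    """Nonnegative matrix tri-factorization: X ≈ F @ S @ G.T,
    with F (m x k1), S (k1 x k2), G (n x k2) all nonnegative.

    Plain multiplicative updates that decrease ||X - F S G^T||_F^2;
    a generic sketch, not the DKT algorithm from the paper."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    F = rng.random((m, k1))
    S = rng.random((k1, k2))
    G = rng.random((n, k2))
    for _ in range(n_iter):
        # Each factor is scaled by the ratio of the negative to the
        # positive part of the objective's gradient (eps avoids 0/0).
        F *= (X @ G @ S.T) / (F @ S @ (G.T @ G) @ S.T + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ (G.T @ G) + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ (F.T @ F) @ S + eps)
    return F, S, G
```

In transfer-learning settings like the one the abstract describes, the middle factor S is the piece typically shared across domains, which is what makes the tri-factor form (rather than two-factor NMF) attractive.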
Keywords :
computer vision; image classification; computer vision tasks; cross-domain image classification; data sets; dyadic knowledge transfer; dyadic transfer learning; manual image annotation; Computer vision; Feature extraction; Image color analysis; Knowledge transfer; Optimization; Semantics; Videos;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2011 IEEE International Conference on Computer Vision (ICCV)
Conference_Location :
Barcelona
ISSN :
1550-5499
Print_ISBN :
978-1-4577-1101-5
Type :
conf
DOI :
10.1109/ICCV.2011.6126287
Filename :
6126287