DocumentCode :
3748893
Title :
Simultaneous Deep Transfer Across Domains and Tasks
Author :
Eric Tzeng;Judy Hoffman;Trevor Darrell;Kate Saenko
Year :
2015
Firstpage :
4068
Lastpage :
4076
Abstract :
Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.
Keywords :
"Training","Visualization","Adaptation models","Standards","Correlation","Semantics","Robots"
Publisher :
ieee
Conference_Titel :
2015 IEEE International Conference on Computer Vision (ICCV)
Electronic_ISSN :
2380-7504
Type :
conf
DOI :
10.1109/ICCV.2015.463
Filename :
7410820