DocumentCode :
44631
Title :
Video Annotation via Image Groups from the Web
Author :
Han Wang ; Xinxiao Wu ; Yunde Jia
Author_Institution :
Beijing Laboratory of Intelligent Information Technology, Beijing Institute of Technology, Beijing, China
Volume :
16
Issue :
5
fYear :
2014
fDate :
Aug. 2014
Firstpage :
1282
Lastpage :
1291
Abstract :
Searching for desired events in uncontrolled videos is a challenging task. Current research mainly focuses on learning concepts from large numbers of labeled videos, but collecting enough labeled videos to train event models under varied conditions is time-consuming and labor-intensive. To alleviate this problem, we propose to leverage abundant Web images for video annotation, since Web images are a rich source of information, with many events roughly annotated and captured under diverse conditions. However, knowledge from the Web is noisy and heterogeneous, and brute-force transfer of image knowledge may hurt video annotation performance. We therefore propose a novel Group-based Domain Adaptation (GDA) learning framework that transfers different groups of knowledge (source domain), queried from a Web image search engine, to consumer videos (target domain). Unlike traditional methods that use multiple source domains of images, our method organizes the Web images according to their intrinsic semantic relationships rather than their sources. Specifically, two types of groups (i.e., event-specific groups and concept-specific groups) are exploited to describe, respectively, the event-level and concept-level semantic meanings of target-domain videos. Under this framework, we assign each image group a weight according to the relevance between that source group and the target domain; each group weight represents how much the corresponding source image group contributes to the knowledge transferred to the target videos. To make the group weights and group classifiers mutually beneficial, a joint optimization algorithm is presented that learns the weights and classifiers simultaneously, using two novel data-dependent regularizers.
Experimental results on three challenging video datasets (i.e., CCV, Kodak, and YouTube) demonstrate the effectiveness of leveraging grouped knowledge gained from Web images for video annotation.
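The core idea of the abstract, combining per-group source classifiers with relevance-based group weights, can be illustrated with a minimal sketch. This is not the authors' actual GDA optimization (which learns weights and classifiers jointly with data-dependent regularizers); the accuracy-based weighting rule and all function names below are assumptions for illustration only.

```python
# Illustrative sketch only: weight each Web-image "group" classifier by a
# simple stand-in for source-target relevance (accuracy on a few labeled
# target videos), then combine per-group scores with those weights.
# This approximates the spirit of group weighting, not the paper's algorithm.

def group_weights(group_scores, target_labels):
    """Compute a normalized weight per source group.

    group_scores: list of per-group score lists, one score per target video
                  (positive score = predicted positive).
    target_labels: labels (+1/-1) for the same target videos.
    """
    weights = []
    for scores in group_scores:
        # Fraction of target videos this group's classifier gets right.
        correct = sum(1 for s, y in zip(scores, target_labels)
                      if (s > 0) == (y > 0))
        weights.append(correct / len(target_labels))
    total = sum(weights)
    if total == 0:
        return [1.0 / len(weights)] * len(weights)  # fall back to uniform
    return [w / total for w in weights]

def combined_score(scores_for_video, weights):
    """Weighted sum of per-group classifier scores for one target video."""
    return sum(w * s for w, s in zip(weights, scores_for_video))

# Toy usage: two groups scored on three labeled target videos.
w = group_weights([[1.0, -1.0, 1.0],    # group 0: all three correct
                   [1.0, 1.0, 1.0]],    # group 1: two of three correct
                  [1, -1, 1])
score = combined_score([0.5, -0.5], w)  # new video's per-group scores
```

In the paper itself, the weights and the group classifiers are learned jointly rather than in this one-shot fashion, so that relevant groups both receive higher weight and shape the final decision boundary.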
Keywords :
learning (artificial intelligence); search engines; video retrieval; GDA learning; Web images; concept-level semantic meanings; event-level semantic meanings; group-based domain adaptation; image group; labeled videos; novel data-dependent regularizers; search engine; video annotation; Knowledge engineering; Semantics; Standards; Support vector machines; Training; Training data; YouTube; Concept-specific group; domain adaptation; event-specific group; video annotation;
fLanguage :
English
Journal_Title :
IEEE Transactions on Multimedia
Publisher :
IEEE
ISSN :
1520-9210
Type :
jour
DOI :
10.1109/TMM.2014.2312251
Filename :
6776501