DocumentCode :
3466552
Title :
Multi-Concept Multi-Modality Active Learning for Interactive Video Annotation
Author :
Wang, Meng ; Hua, Xian-Sheng ; Song, Yan ; Tang, Jinhui ; Dai, Li-Rong
Author_Institution :
Univ. of Sci. & Technol. of China, Hefei
fYear :
2007
fDate :
17-19 Sept. 2007
Firstpage :
321
Lastpage :
328
Abstract :
Active learning methods have been widely applied to reduce human labeling effort in multimedia annotation tasks. However, in traditional methods multiple concepts are usually annotated sequentially, i.e., each concept is exhaustively annotated before proceeding to the next, without taking the learnabilities of different concepts into consideration. Furthermore, in most of these methods only a single modality is applied. This paper presents a novel multi-concept multi-modality active learning method which exchangeably annotates multiple concepts in the context of multi-modality. It iteratively selects a concept and a batch of unlabeled samples, and then these samples are annotated with the selected concept. After that, graph-based semi-supervised learning is conducted on each modality for the selected concept. The proposed method takes into account both the learnabilities of different concepts and the potentials of different modalities. Experimental results on the TRECVID 2005 benchmark have demonstrated its effectiveness and efficiency.
Keywords :
interactive video; learning (artificial intelligence); multimedia systems; graph-based semisupervised learning; interactive video; multiconcept multimodality active learning; multimedia annotation; Asia; Humans; Labeling; Large-scale systems; Learning systems; Semisupervised learning; Video compression; Videoconference;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Semantic Computing, 2007. ICSC 2007. International Conference on
Conference_Location :
Irvine, CA
Print_ISBN :
978-0-7695-2997-4
Type :
conf
DOI :
10.1109/ICSC.2007.14
Filename :
4338365