Title :
Generating semantic visual templates for video databases
Author :
Chen, William ; Chang, Shih-Fu
Author_Institution :
Dept. of Electr. Eng., Columbia Univ., New York, NY, USA
Abstract :
We describe a system that generates semantic visual templates (SVTs) for video databases. From a single query sketch, new queries are automatically generated, each representing a different view of the initial sketch. The combination of the original and new queries forms a large set of potential queries for a content-based video retrieval system. Through Bayesian relevance feedback, the user narrows these choices to an exemplar set. This exemplar set, the SVT, represents a personalized view of a concept and an effective set of queries for retrieving a general category of images and videos. We have generated SVTs for several classes of videos, including sunsets, high jumpers, and slalom skiers. Our experiments show that the user can quickly converge on SVTs with near-optimal performance, achieving over 85% of the precision of icons chosen by exhaustive search.
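The abstract describes the feedback loop only at a high level. As a rough illustration of Bayesian relevance feedback over candidate query templates, the sketch below keeps a posterior over templates generated from one initial sketch and re-weights it after a round of user feedback; the feature vectors, Gaussian likelihood model, and exemplar-set size are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch of Bayesian relevance feedback over candidate query templates.
# Features, likelihood, and set size are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Candidate templates generated from one initial sketch (here: random feature vectors).
templates = rng.normal(size=(20, 8))                        # 20 candidates, 8-D features
posterior = np.full(len(templates), 1.0 / len(templates))   # uniform prior

def update(templates, posterior, liked, disliked, sigma=1.0):
    """One feedback round: raise the weight of templates near liked examples,
    lower the weight of templates near disliked ones (Gaussian likelihood assumption)."""
    likelihood = np.ones(len(templates))
    for x in liked:
        d = np.linalg.norm(templates - x, axis=1)
        likelihood *= np.exp(-d**2 / (2 * sigma**2))
    for x in disliked:
        d = np.linalg.norm(templates - x, axis=1)
        likelihood *= 1.0 - np.exp(-d**2 / (2 * sigma**2))
    post = posterior * likelihood
    return post / post.sum()

# Simulated feedback: the user likes results near template 3, dislikes those near template 15.
posterior = update(templates, posterior, liked=[templates[3]], disliked=[templates[15]])

# The top-weighted templates form the exemplar set (the SVT).
svt = np.argsort(posterior)[::-1][:5]
print("exemplar templates:", svt)
```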
Keywords :
content-based retrieval; relevance feedback; video databases; video signal processing; Bayesian relevance feedback; content-based video retrieval system; exemplar set; icons; query sketch; semantic visual template generation; Bayesian methods; Bridges; Content based retrieval; Feedback; Image converters; Image databases; Image retrieval; Information retrieval; Search engines; Visual databases
Conference_Title :
2000 IEEE International Conference on Multimedia and Expo (ICME 2000)
Conference_Location :
New York, NY
Print_ISBN :
0-7803-6536-4
DOI :
10.1109/ICME.2000.871013