Title :
Data independent visual vocabulary
Author :
Faheema, A.G. ; Lakshmi, A. ; Priyanka, MS ; Rakshit, Subrata
Author_Institution :
CAIR, DRDO Complex, Bangalore, India
Abstract :
In this paper we present a novel method of generating a visual vocabulary. Unlike traditional methods, where the visual vocabulary is generated from very specific image data, we use an analytic generative mechanism that is independent of training data. The motivation for a data-independent visual vocabulary is its usage in distributed applications: a common, fixed visual vocabulary is needed across multiple, dynamically changing image repositories that may have to share data and support remote queries. Generating a visual vocabulary is in itself a time-consuming job, as it requires capturing data and clustering high-dimensional descriptors. Using a data-independent visual vocabulary therefore eases the design of Net Centric computing systems for image and video retrieval. We have examined a large number of visual vocabularies generated from various data sets such as Caltech, Pascal and Correll, and worked out an approximation model for the visual vocabulary. Experimental results show that our visual vocabulary outperforms domain-specific visual vocabularies. We empirically conclude that our visual vocabulary suits the requirements of Net Centric applications.
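Note: the abstract does not spell out the analytic generative mechanism itself, only that the resulting vocabulary is fixed and data-independent. The following Python sketch is therefore only an illustration of how such a vocabulary could be shared and used: a codebook regenerated identically on every node from a fixed seed (a stand-in for the paper's analytic mechanism, not the authors' actual method), against which local descriptors of an image are quantised into a bag-of-visual-words histogram. The descriptor dimensionality, vocabulary size and seed are assumed values for illustration.

    # Minimal sketch: fixed, data-independent visual vocabulary and
    # bag-of-visual-words quantisation. The codebook here is a stand-in
    # (fixed-seed random points), not the paper's analytic mechanism;
    # the point is that every site can regenerate the identical
    # vocabulary without exchanging any training images.
    import numpy as np

    DESC_DIM = 128      # e.g. SIFT descriptor dimensionality (assumption)
    VOCAB_SIZE = 1000   # number of visual words (assumption)

    def generate_vocabulary(size=VOCAB_SIZE, dim=DESC_DIM, seed=42):
        """Data-independent codebook: identical on every node for a given seed."""
        rng = np.random.default_rng(seed)
        words = rng.random((size, dim))
        # L2-normalise each word so it lives on the same scale as
        # L2-normalised local descriptors.
        return words / np.linalg.norm(words, axis=1, keepdims=True)

    def bag_of_words(descriptors, vocabulary):
        """Hard-assign each descriptor to its nearest visual word and
        return an L1-normalised histogram (the image signature)."""
        # Pairwise squared Euclidean distances: |d|^2 - 2 d.w + |w|^2
        d2 = (
            (descriptors ** 2).sum(axis=1, keepdims=True)
            - 2.0 * descriptors @ vocabulary.T
            + (vocabulary ** 2).sum(axis=1)
        )
        assignments = d2.argmin(axis=1)
        hist = np.bincount(assignments, minlength=len(vocabulary)).astype(float)
        return hist / max(hist.sum(), 1.0)

    if __name__ == "__main__":
        vocab = generate_vocabulary()
        # Placeholder for real local descriptors (e.g. SIFT) of one image.
        fake_descriptors = np.random.default_rng(0).random((500, DESC_DIM))
        signature = bag_of_words(fake_descriptors, vocab)
        print(signature.shape, signature.sum())  # (1000,) 1.0

Because the vocabulary depends only on the seed and sizes, repositories and query clients in a distributed (Net Centric) setting can compare such histograms directly, with no need to redistribute a retrained codebook when image collections change.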
Keywords :
content-based retrieval; feature extraction; image processing; image retrieval; learning (artificial intelligence); analytic generative mechanism; approximation model; domain specific visual vocabularies; high dimensional data clustering; image data; image repositories; image retrieval; net centric computing system; remote queries; training data; video retrieval; Entropy; Feature extraction; Histograms; Object recognition; Vectors; Visualization; Vocabulary;
Conference_Titel :
Signal Processing and Communications (SPCOM), 2012 International Conference on
Conference_Location :
Bangalore
Print_ISBN :
978-1-4673-2013-9
DOI :
10.1109/SPCOM.2012.6290027