Title of article :
Latent visual context learning for web image applications
Author/Authors :
Zhou, Wengang; Tian, Qi; Lu, Yijuan; Yang, Linjun; Li, Houqiang
Issue Information :
Journal issue, serial year 2011
Pages :
11
From page :
2263
To page :
2273
Abstract :
Recently, image representation based on the bag-of-visual-words (BoW) model has been widely applied in the image and vision domains. In BoW, a visual codebook of visual words is defined, usually by clustering local features, so that any novel image can be represented by the occurrences of the visual words it contains. Given a set of images, we argue that the significance of each image is determined by the significance of its contained visual words. Traditionally, the significance of visual words is defined by term frequency-inverse document frequency (tf-idf), which cannot necessarily capture the intrinsic visual context. In this paper, we propose a new scheme of latent visual context learning (LVCL). The visual context among images and visual words is formulated through latent semantic context and visual link graph analysis. With LVCL, the importance of visual words and images can be distinguished, which facilitates image-level applications such as image re-ranking and canonical image selection. We validate our approach on text-query-based search results returned by Google Image. Experimental results demonstrate the effectiveness and potential of LVCL for image re-ranking and canonical image selection compared with state-of-the-art approaches.
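As background for the tf-idf baseline that the abstract contrasts LVCL against, the sketch below illustrates one common way to assign tf-idf weights to visual words over BoW count histograms. The function name tfidf_visual_words, the smoothing choices, and the toy counts are illustrative assumptions, not the paper's own formulation.

```python
import numpy as np

def tfidf_visual_words(bow_counts):
    """Compute tf-idf weights for visual words from raw BoW counts.

    bow_counts: (n_images, n_words) array; entry (i, j) is the number
    of occurrences of visual word j in image i.
    """
    bow_counts = np.asarray(bow_counts, dtype=float)
    n_images, _ = bow_counts.shape

    # Term frequency: word occurrences normalized by the image's total count.
    tf = bow_counts / np.maximum(bow_counts.sum(axis=1, keepdims=True), 1.0)

    # Document frequency: number of images containing each visual word.
    df = np.count_nonzero(bow_counts > 0, axis=0)

    # Inverse document frequency, clipped to avoid division by zero.
    idf = np.log(n_images / np.maximum(df, 1.0))

    # Element-wise tf-idf weight matrix of shape (n_images, n_words).
    return tf * idf

# Toy example: 3 images described over a codebook of 4 visual words.
counts = [[2, 0, 1, 0],
          [0, 3, 1, 0],
          [1, 1, 1, 5]]
print(tfidf_visual_words(counts))
```

Under this weighting, a visual word appearing in every image receives zero idf weight regardless of how often it occurs, which is the kind of purely count-based behavior the abstract argues fails to capture visual context.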
Keywords :
Set coverage , Image re-ranking , Visual context , Canonical image selection
Journal title :
PATTERN RECOGNITION
Serial Year :
2011
Record number :
1736774