DocumentCode :
3613042
Title :
Learning Visual Semantic Relationships for Efficient Visual Retrieval
Author :
Hong, Richang ; Yang, Yang ; Wang, Meng ; Hua, Xian-Sheng
Author_Institution :
Hefei University of Technology, Hefei, China
Volume :
1
Issue :
4
fYear :
2015
Firstpage :
152
Lastpage :
161
Abstract :
In this paper, we investigate how to establish relationships between semantic concepts from large-scale real-world click data collected by a commercial image search engine, a challenging task because the click data is noisy, containing typos, different queries that refer to the same concept, and so on. We first define five specific relationships between concepts. We then extract concept relationship features in the textual and visual domains to train concept relationship models, so that each pair of concepts is classified into one of the five relationships. We study the efficacy of these conceptual relationships by applying them to augment imperfect image tags, i.e., to improve their representative power. We further employ a sophisticated hashing approach to transform the augmented image tags into binary codes, which are subsequently used for the content-based image retrieval task. Experimental results on the NUS-WIDE dataset demonstrate the superiority of the proposed approach compared to state-of-the-art methods.
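Neither the concept relationship models nor the specific hashing method are detailed in this record, so the following Python sketch only illustrates the final retrieval step described above under stated assumptions: augmented tag sets are encoded as binary indicator vectors, hashed into short binary codes (here using simple random-projection hashing as a stand-in for the paper's hashing approach), and compared by Hamming distance. The vocabulary, image tags, and all function names are hypothetical.

import numpy as np

# Hypothetical concept vocabulary (stand-in; the paper mines concepts
# from large-scale click data).
vocabulary = ["dog", "puppy", "beach", "sea", "sunset"]

def tags_to_vector(tags, vocab):
    """Encode an image's (augmented) tag set as a binary indicator vector."""
    return np.array([1.0 if t in tags else 0.0 for t in vocab])

# Random-projection hashing as a stand-in for the paper's hashing approach:
# project the tag vector and threshold at zero to obtain binary codes.
rng = np.random.default_rng(0)
n_bits = 8
projection = rng.standard_normal((len(vocabulary), n_bits))

def hash_tags(tags, vocab):
    v = tags_to_vector(tags, vocab)
    return (v @ projection > 0).astype(np.uint8)

def hamming_distance(a, b):
    return int(np.count_nonzero(a != b))

# Toy retrieval: rank database images by Hamming distance to the query code.
database = {
    "img1": {"dog", "puppy"},            # tags augmented with related concepts
    "img2": {"beach", "sea", "sunset"},
}
query_code = hash_tags({"dog"}, vocabulary)
ranked = sorted(database,
                key=lambda k: hamming_distance(query_code,
                                               hash_tags(database[k], vocabulary)))
print(ranked)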
Keywords :
Binary codes; Feature extraction; Image retrieval; Semantics; Visualization; Visual concept relationship; Hashing
fLanguage :
English
Journal_Title :
IEEE Transactions on Big Data
Publisher :
IEEE
Type :
jour
DOI :
10.1109/TBDATA.2016.2515640
Filename :
7381653