DocumentCode :
1760534
Title :
Sparse Multi-Modal Hashing
Author :
Fei Wu ; Zhou Yu ; Yi Yang ; Siliang Tang ; Yin Zhang ; Yueting Zhuang
Author_Institution :
Coll. of Comput. Sci. & Technol., Zhejiang Univ., Hangzhou, China
Volume :
16
Issue :
2
fYear :
2014
fDate :
Feb. 2014
Firstpage :
427
Lastpage :
439
Abstract :
Learning hash functions across heterogeneous high-dimensional features is very desirable for many applications involving multi-modal data objects. In this paper, we propose an approach that obtains sparse codesets for data objects across different modalities via joint multi-modal dictionary learning, which we call sparse multi-modal hashing (abbreviated as SM2H). In SM2H, both intra-modality similarity and inter-modality similarity are first modeled by a hypergraph, and multi-modal dictionaries are then jointly learned by Hypergraph Laplacian sparse coding. Based on the learned dictionaries, the sparse codeset of each data object is obtained and used for multi-modal approximate nearest neighbor retrieval with a sensitive Jaccard metric. Experimental results show that SM2H outperforms other methods in terms of mAP and Percentage on two real-world data sets.
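The abstract does not specify how the sensitive Jaccard metric weights atoms, so the sketch below assumes a plain (unweighted) Jaccard overlap between the sets of dictionary atoms with non-zero coefficients in each sparse codeset; the function names jaccard and retrieve are illustrative and not from the paper.

```python
# Minimal sketch of codeset-based retrieval, assuming each data object is represented
# by the set of dictionary-atom indices that receive non-zero sparse coefficients.
# This is NOT the authors' sensitive Jaccard metric; plain Jaccard is used for illustration.

def jaccard(codeset_a: set, codeset_b: set) -> float:
    """Jaccard similarity between two sets of active dictionary-atom indices."""
    if not codeset_a and not codeset_b:
        return 0.0
    return len(codeset_a & codeset_b) / len(codeset_a | codeset_b)

def retrieve(query_codeset: set, database: list, top_k: int = 10):
    """Rank database codesets by their Jaccard overlap with the query codeset."""
    scores = [(idx, jaccard(query_codeset, codes)) for idx, codes in enumerate(database)]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:top_k]

# Usage example with made-up atom indices: a query from one modality (e.g., image)
# ranked against codesets from another modality produced by the jointly learned dictionaries.
if __name__ == "__main__":
    query = {3, 17, 42, 101}
    database = [{3, 17, 99}, {42, 101, 3, 17}, {5, 6, 7}]
    print(retrieve(query, database, top_k=2))
```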
Keywords :
data acquisition; file organisation; graph theory; information retrieval; learning (artificial intelligence); SM2H; data object acquisition; heterogeneous high-dimensional features; hypergraph Laplacian sparse coding; intermodality similarity; intramodality similarity; joint multimodal dictionary learning; multimodal approximate nearest neighbor retrieval; multimodal data objects; sensitive Jaccard metric; sparse codesets; sparse multimodal hashing; Artificial neural networks; Correlation; Data models; Dictionaries; Dinosaurs; Feature extraction; Search problems; Dictionary learning; multi-modal hashing; sparse coding;
fLanguage :
English
Journal_Title :
IEEE Transactions on Multimedia
Publisher :
IEEE
ISSN :
1520-9210
Type :
jour
DOI :
10.1109/TMM.2013.2291214
Filename :
6665155