• DocumentCode
    75934
  • Title
    Near-Duplicate Image Retrieval Based on Contextual Descriptor
  • Author
    Jinliang Yao ; Bing Yang ; Qiuming Zhu
  • Author_Institution
    Comput. Sci. Sch., Hangzhou Dianzi Univ., Hangzhou, China
  • Volume
    22
  • Issue
    9
  • fYear
    2015
  • fDate
    Sept. 2015
  • Firstpage
    1404
  • Lastpage
    1408
  • Abstract
    The state of the art in near-duplicate image retrieval is mostly based on the Bag-of-Visual-Words model. However, visual words are prone to mismatches because of quantization errors in the local features they represent. To improve the precision of visual word matching, contextual descriptors are designed to strengthen the discriminative power of visual words and to measure their contextual similarity. This paper presents a new contextual descriptor that measures the contextual similarity of visual words in order to immediately discard mismatches and reduce the number of candidate images. The new contextual descriptor encodes the relationships of dominant orientation and spatial position between referential visual words and their context. Experimental results on the benchmark Copydays dataset demonstrate its efficiency and effectiveness for near-duplicate image retrieval.
  • Keywords
    image matching; image retrieval; visual databases; bag-of-visual-words model; contextual descriptor; contextual similarity; Copydays dataset; discriminative power; dominant orientation; local features; near-duplicate image retrieval; quantization errors; referential visual words; spatial position; visual word matching; Context; Feature extraction; Image resolution; Image retrieval; Indexing; Quantization (signal); Visualization; Contextual descriptor; near-duplicate image retrieval; spatial constraint; visual word;
  • fLanguage
    English
  • Journal_Title
    Signal Processing Letters, IEEE
  • Publisher
    IEEE
  • ISSN
    1070-9908
  • Type
    jour
  • DOI
    10.1109/LSP.2014.2377795
  • Filename
    6975087