Title :
Supervised Binary Hash Code Learning with Jensen Shannon Divergence
Abstract :
This paper proposes to learn binary hash codes within a statistical learning framework, in which an upper bound on the probability of Bayes decision errors is derived for different forms of hash functions and a rigorous proof of the convergence of the upper bound is presented. Consequently, minimizing this upper bound leads to consistent performance improvements of existing hash code learning algorithms, regardless of whether the original algorithms are unsupervised or supervised. This paper also presents a fast hash coding method that exploits simple binary tests to achieve orders-of-magnitude improvements in coding speed compared to projection-based methods.
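To make the abstract's two central ideas concrete, the sketch below illustrates (a) the Jensen Shannon divergence between two discrete distributions, the quantity named in the title, and (b) why binary-test coding is much cheaper than projection-based coding: a projection-based bit costs a full d-dimensional dot product, while a binary-test bit is a single comparison of two coordinates. This is an illustrative sketch only; the paper's learned hash functions and tree-based tests are not reproduced here, and all function names are hypothetical.

```python
import numpy as np


def js_divergence(p, q):
    """Jensen Shannon divergence (natural log) between two
    discrete distributions p and q; illustrative helper, not
    the paper's bound."""
    m = 0.5 * (p + q)

    def kl(a, b):
        # KL divergence with the 0 * log 0 = 0 convention.
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


rng = np.random.default_rng(0)
d, n_bits = 64, 16
x = rng.standard_normal(d)  # a toy feature vector

# Projection-based hashing: each bit needs an O(d) dot product.
W = rng.standard_normal((n_bits, d))      # 16 random hyperplanes
proj_code = (W @ x > 0).astype(np.uint8)  # 16-bit code

# Binary-test hashing: each bit compares two coordinates, so
# coding cost per bit is O(1), independent of the dimension d.
pairs = rng.choice(d, size=(n_bits, 2))
test_code = (x[pairs[:, 0]] > x[pairs[:, 1]]).astype(np.uint8)
```

For identical distributions the divergence is 0, and for distributions with disjoint support it reaches its maximum of ln 2, which is why it is a convenient bounded measure of class separability.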
Keywords :
Bayes methods; binary codes; decision theory; error statistics; file organisation; unsupervised learning; Bayes decision error probability; Jensen Shannon divergence; binary tests; hash functions; projection-based methods; statistical learning framework; supervised binary hash code learning; upper bound; convergence; linear programming; training; vectors; approximate nearest neighbor search; indexing; matching; randomized tree
Conference_Title :
2013 IEEE International Conference on Computer Vision (ICCV)
Conference_Location :
Sydney, NSW, Australia
DOI :
10.1109/ICCV.2013.325