Title :
Experiments on estimating random mapping
Author :
Ho, K.M. ; Wang, C.J.
Author_Institution :
Dept. of Comput. Sci., Essex Univ., Colchester, UK
Abstract :
The generic function of a feedforward multilayer perceptron (MLP) network is to map patterns from one space to another. This mapping function, determined by the set of examples used to train the network, may be viewed as a hash function. This paper reports experiments on using a backpropagation MLP network with a dynamic hidden layer to estimate a random mapping from the input space to the output space, and on using this estimated mapping as a hash function for a given population of keys. Comparative studies show that the MLP-estimated hash function performs robustly over various populations of sparse hash keys that would cause uneven distributions with some traditional hash functions.
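The idea in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: a tiny one-hidden-layer MLP (fixed size here, whereas the paper uses a dynamic hidden layer) is trained by backpropagation to approximate a random key-to-bucket mapping over a sample of keys, and its thresholded outputs are then read as a bucket index. All names, layer sizes, and training parameters are assumptions.

```python
# Sketch of an MLP used as a hash function (hypothetical sizes and training setup).
import math
import random

random.seed(0)

IN, HID, OUT = 16, 8, 4          # 16-bit keys, 8 hidden units, 4 output bits
N_BUCKETS = 2 ** OUT             # each output unit contributes one bit of the bucket index

def encode(key):
    """Binary-encode an integer key as a list of 0/1 inputs."""
    return [(key >> i) & 1 for i in range(IN)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Randomly initialised weight matrices (no biases, for brevity).
W1 = [[random.uniform(-1, 1) for _ in range(IN)] for _ in range(HID)]
W2 = [[random.uniform(-1, 1) for _ in range(HID)] for _ in range(OUT)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    o = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in W2]
    return h, o

def train(keys, targets, epochs=200, lr=0.5):
    """Plain backpropagation with squared-error loss and sigmoid units."""
    for _ in range(epochs):
        for key, t in zip(keys, targets):
            x = encode(key)
            h, o = forward(x)
            do = [(o[k] - t[k]) * o[k] * (1 - o[k]) for k in range(OUT)]
            dh = [h[j] * (1 - h[j]) * sum(do[k] * W2[k][j] for k in range(OUT))
                  for j in range(HID)]
            for k in range(OUT):
                for j in range(HID):
                    W2[k][j] -= lr * do[k] * h[j]
            for j in range(HID):
                for i in range(IN):
                    W1[j][i] -= lr * dh[j] * x[i]

def mlp_hash(key):
    """Hash a key by thresholding each output unit to one bit of the bucket index."""
    _, o = forward(encode(key))
    return sum(1 << k for k in range(OUT) if o[k] > 0.5)

# Train the network to estimate a random mapping over a sample of keys,
# then use the trained network as the hash function for the whole key space.
sample = random.sample(range(1 << IN), 64)
targets = [[random.randint(0, 1) for _ in range(OUT)] for _ in sample]
train(sample, targets)
```

After training, `mlp_hash(key)` is deterministic and always yields a bucket in `[0, N_BUCKETS)`; how evenly it spreads a given key population is exactly the property the paper's comparative studies evaluate.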
Keywords :
backpropagation; feedforward neural nets; functional analysis; multilayer perceptrons; Neuro-Hasher; dynamic hidden layer; feedforward network; generic function; hash function; multilayer perceptron; random mapping; Acceleration; Backpropagation algorithms; Binary codes; Computer science; Feedforward neural networks; Multilayer perceptrons; Neural networks; Robustness; Testing;
Conference_Titel :
Neural Networks, 1996, IEEE International Conference on
Conference_Location :
Washington, DC
Print_ISBN :
0-7803-3210-5
DOI :
10.1109/ICNN.1996.548921