DocumentCode :
303242
Title :
Experiments on estimating random mapping
Author :
Ho, K.M. ; Wang, C.J.
Author_Institution :
Dept. of Comput. Sci., Essex Univ., Colchester, UK
Volume :
1
fYear :
1996
fDate :
3-6 Jun 1996
Firstpage :
377
Abstract :
The generic function of a feedforward multilayer perceptron (MLP) network is to map patterns from one space to another. This mapping function, determined by the set of examples used to train the network, may be viewed as a hash function. This paper reports experiments on using a backpropagation MLP network with a dynamic hidden layer to estimate a random mapping from the input space to the output space, and on using this estimated mapping as a hash function for a given population of keys. Comparative studies show that the MLP-estimated hash function performs robustly over various populations of sparse hash keys that would cause uneven distributions with some traditional hash functions.
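To make the idea in the abstract concrete, here is a minimal sketch (not the authors' code) of training a backpropagation MLP to fit a random key-to-slot mapping and then reusing the trained network as a hash function. The bit-vector key encoding, fixed hidden width, learning rate, table size, and epoch count are illustrative assumptions, not values from the paper, and the paper's dynamic hidden layer is simplified here to a fixed one.

```python
# Sketch: estimate a random key -> slot mapping with a one-hidden-layer
# backpropagation MLP, then use the trained network as a hash function.
# All hyperparameters below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

TABLE_SIZE = 64   # number of hash slots (assumption)
KEY_BITS = 16     # keys encoded as 16-bit binary vectors (assumption)
HIDDEN = 32       # fixed hidden width; the paper grows the hidden layer dynamically

def encode(keys):
    """Map integer keys to {0,1} bit vectors (one row per key)."""
    return ((np.asarray(keys)[:, None] >> np.arange(KEY_BITS)) & 1).astype(float)

# Training set: a sample of keys paired with slots drawn from a random mapping.
train_keys = rng.choice(2**KEY_BITS, size=512, replace=False)
train_slots = rng.integers(0, TABLE_SIZE, size=train_keys.size)
X = encode(train_keys)
Y = (train_slots / (TABLE_SIZE - 1))[:, None]   # scale targets into [0, 1]

# One-hidden-layer MLP trained with plain gradient-descent backpropagation.
W1 = rng.normal(0, 0.5, (KEY_BITS, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.5, (HIDDEN, 1));        b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(2000):
    H = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(H @ W2 + b2)
    d_out = (out - Y) * out * (1 - out)       # squared-error output gradient
    d_H = (d_out @ W2.T) * H * (1 - H)        # backpropagated hidden gradient
    W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_H / len(X);   b1 -= lr * d_H.mean(0)

def neuro_hash(keys):
    """Hash keys by running them through the trained network."""
    H = sigmoid(encode(keys) @ W1 + b1)
    out = sigmoid(H @ W2 + b2)[:, 0]
    return np.minimum((out * TABLE_SIZE).astype(int), TABLE_SIZE - 1)

# Inspect how evenly a fresh key population spreads over the slots.
probe = rng.choice(2**KEY_BITS, size=1000, replace=False)
print(np.bincount(neuro_hash(probe), minlength=TABLE_SIZE))
```

The final histogram gives a rough picture of the bucket distribution; the paper's comparison against traditional hash functions over sparse key populations is the part this sketch does not reproduce.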
Keywords :
backpropagation; feedforward neural nets; functional analysis; multilayer perceptrons; Neuro-Hasher; dynamic hidden layer; feedforward network; generic function; hash function; multilayer perceptron; random mapping; Acceleration; Backpropagation algorithms; Binary codes; Computer science; Feedforward neural networks; Multilayer perceptrons; Neural networks; Robustness; Testing
fLanguage :
English
Publisher :
IEEE
Conference_Title :
Neural Networks, 1996, IEEE International Conference on
Conference_Location :
Washington, DC
Print_ISBN :
0-7803-3210-5
Type :
conf
DOI :
10.1109/ICNN.1996.548921
Filename :
548921