DocumentCode :
2699624
Title :
Neural network number systems
Author :
Brown, Harold K. ; Cross, Donald D. ; Whittaker, Alan G.
fYear :
1990
fDate :
17-21 June 1990
Firstpage :
903
Abstract :
Three fundamental schemes for representing numbers in a digital neural network are explored: the fixed-point number, the floating-point number, and the exponential number. These three numeric representation schemes are analyzed with emphasis on the memory-efficiency, precision, and dynamic-range tradeoffs associated with each when used to compute neural network vector dot products. Specifically, the authors explore a small image-processing problem, an 8×8-pixel image with 256 shades per pixel, to investigate the effect of these number formats on the total memory required by a neural network. It is concluded that, by carefully matching number formats to the precision and dynamic-range requirements of each layer in a neural network, one can optimize memory utilization for the particular class of problem involved. Because it is impractical to design and build hardware for each particular problem to be solved with a neural network, the authors emphasize the importance of building neural network hardware that can handle heterogeneous number formats, dynamically programmable from software.
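
The memory tradeoff described in the abstract can be illustrated with a short sketch. The Python example below is not from the paper; the bit widths assumed for the fixed-point, floating-point, and exponential (sign-plus-exponent) formats are illustrative choices only. It counts the bits needed to store one neuron's weights for the 8×8-pixel, 256-shade input, and the worst-case accumulator width for a fixed-point dot product over the 64 pixels.

import math

N_INPUTS = 8 * 8    # 64 pixels in the 8x8 image
PIXEL_BITS = 8      # 256 shades -> 8-bit fixed-point pixel values

def fixed_point_accumulator_bits(input_bits, weight_bits, n_terms):
    # Worst-case width of a fixed-point dot-product accumulator:
    # each product needs input_bits + weight_bits bits, and summing
    # n_terms of them adds ceil(log2(n_terms)) more bits.
    return input_bits + weight_bits + math.ceil(math.log2(n_terms))

# Assumed storage cost per weight for each format (illustrative only).
weight_formats = {
    "fixed-point, 8-bit weights": 8,
    "floating-point, IEEE 754 single (32-bit)": 32,
    "exponential, 1 sign bit + 5 exponent bits": 6,
}

for name, bits_per_weight in weight_formats.items():
    layer_bits = N_INPUTS * bits_per_weight   # weights feeding one neuron
    print(f"{name}: {layer_bits} bits of weight memory per neuron")

acc_bits = fixed_point_accumulator_bits(PIXEL_BITS, 8, N_INPUTS)
print(f"fixed-point dot product over {N_INPUTS} pixels needs a "
      f"{acc_bits}-bit accumulator in the worst case")

Under these assumed widths, the exponential format stores a neuron's weights in roughly a fifth of the memory of single-precision floating point, which is the kind of layer-by-layer tradeoff the paper argues should be programmable in hardware.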
Keywords :
digital arithmetic; neural nets; optimisation; picture processing; digital neural network; exponential number; fixed-point number; floating-point number; heterogeneous number formats; image-processing problem; memory utilization; neural network number systems; representation schemes; vector dot products;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
1990 IJCNN International Joint Conference on Neural Networks
Conference_Location :
San Diego, CA, USA
Type :
conf
DOI :
10.1109/IJCNN.1990.137949
Filename :
5726906