Title :
The effects of quantization on high order function neural networks
Author :
Jiang, Minghu ; Gielen, Georges
Author_Institution :
ESAT-MICAS, Catholic Univ. of Leuven, Heverlee, Belgium
Abstract :
This paper examines the combined effects of quantization and clipping on the performance of high-order function neural networks (HOFNN), with a view to simpler and more reliable hardware implementation. We investigate how to minimize the effects of quantization while still guaranteeing a given level of training and nonlinear capability. We establish theoretically and prove the relationships among the bit resolution of inputs and outputs, the training and quantization schemes, the network order, and the performance degradation of the HOFNN, showing that (1) at a fixed number of bits, the signal-to-noise ratio (SNR) decreases as the network order increases; (2) the SNR increases with the number of bits; and (3) the SNR amplification factor through a nonlinear neuron, which is always less than 1, is independent of the quantization error. Experiments reveal that the network order of the HOFNN affects performance most strongly at low quantization bit widths, and the simulation results agree with the proposed theoretical analysis.
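The first two relationships in the abstract can be illustrated numerically. The following is a minimal sketch, not the authors' model: it uniformly quantizes (with clipping) inputs in [-1, 1] and measures the SNR of the k-th-order input term x^k, as used in a high-order network. The names `quantize` and `snr_db`, the bit widths, and the orders are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100_000)  # hypothetical input samples in [-1, 1]

def quantize(v, bits):
    """Uniform quantization of values in [-1, 1] to 2**bits levels, with clipping."""
    step = 2.0 / (2 ** bits)
    return np.clip(np.round(v / step) * step, -1.0, 1.0)

def snr_db(clean, noisy):
    """SNR in dB between a clean signal and its quantized counterpart."""
    noise = noisy - clean
    return 10.0 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))

snr = {}
for bits in (4, 8):
    for order in (1, 2, 3):
        # k-th-order term computed from quantized inputs vs. exact inputs
        snr[(bits, order)] = snr_db(x ** order, quantize(x, bits) ** order)
        print(f"bits={bits} order={order} SNR={snr[(bits, order)]:.1f} dB")
```

Running this shows the SNR dropping as the order k rises at a fixed bit width, and rising with the bit width at a fixed order, consistent with findings (1) and (2).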
Keywords :
neural nets; quantisation (signal); amplifying factor; bit resolution; clipping; high order function neural networks; performance degradation; quantization effects; reliable hardware implementation; training; Analytical models; Degradation; Feedforward neural networks; Neural network hardware; Neural networks; Neurons; Pattern recognition; Performance analysis; Quantization; Signal to noise ratio;
Conference_Titel :
Neural Networks for Signal Processing XI, 2001. Proceedings of the 2001 IEEE Signal Processing Society Workshop
Conference_Location :
North Falmouth, MA
Print_ISBN :
0-7803-7196-8
DOI :
10.1109/NNSP.2001.943119