• DocumentCode
    288641
  • Title
    Reducing the effect of quantization by weight scaling
  • Author
    Withagen, Heini
  • Author_Institution
    Dept. of Electr. Eng., Eindhoven Univ. of Technol., Netherlands
  • Volume
    4
  • fYear
    1994
  • fDate
    27 Jun-2 Jul 1994
  • Firstpage
    2128
  • Abstract
    By a statistical analysis of the sensitivity of feedforward neural networks to errors in the weights, we show that an optimal scaling factor for the weights exists as the number of inputs to a neuron increases. When this scaling technique is used, the error in a neuron's output due to quantization errors is no longer influenced by the size of the network. This technique is especially interesting for the implementation of neural networks in analog electronics.
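    The abstract's idea — scaling the weights so they span more of the quantizer's range, then compensating for the scale factor afterwards — can be illustrated with a toy sketch. The helper names, scale factor, and step size below are illustrative choices, not values taken from the paper:

    ```python
    def quantize(w, step):
        """Round a weight to the nearest level of a uniform quantizer."""
        return round(w / step) * step

    def neuron_output(weights, inputs):
        """Linear part of a neuron: the weighted sum of its inputs."""
        return sum(w * x for w, x in zip(weights, inputs))

    def output_error(weights, inputs, step, scale=1.0):
        """Output error when weights are scaled up, quantized, then scaled back."""
        qw = [quantize(w * scale, step) / scale for w in weights]
        return abs(neuron_output(qw, inputs) - neuron_output(weights, inputs))

    # Small weights against a coarse quantizer: without scaling, most weights
    # collapse to the nearest quantization level and the output error is large.
    weights = [0.04, -0.03, 0.06, -0.04]
    inputs = [1.0, 1.0, 1.0, 1.0]
    step = 0.1

    err_plain = output_error(weights, inputs, step)             # ~0.07
    err_scaled = output_error(weights, inputs, step, scale=10)  # ~0.0
    ```

    With the (hypothetical) scale factor of 10, the quantizer resolves the scaled weights exactly, so the rescaled output error nearly vanishes; the paper's contribution is showing statistically how to pick such a factor as the fan-in of a neuron grows.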
  • Keywords
    feedforward neural nets; roundoff errors; statistical analysis; feedforward neural networks; optimal scaling factor; quantization errors; weight scaling; Artificial neural networks; Feedforward neural networks; Multi-layer neural network; Neural network hardware; Neural networks; Neurons; Quantization; Signal to noise ratio; Statistical analysis; Stochastic processes
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence)
  • Conference_Location
    Orlando, FL
  • Print_ISBN
    0-7803-1901-X
  • Type
    conf
  • DOI
    10.1109/ICNN.1994.374544
  • Filename
    374544