  • DocumentCode
    2089787
  • Title
    Error analysis of quantized weights for feedforward neural networks (FNN)

  • Author
    Wu, Duanpei ; Gowdy, J.N.

  • Author_Institution
    Dept. of Electr. & Comput. Eng., Clemson Univ., SC, USA
  • fYear
    1994
  • fDate
    10-13 Apr 1994
  • Firstpage
    475
  • Lastpage
    479
  • Abstract
    When a neural network is implemented with limited precision hardware, errors from the quantization of weights become important factors to be considered. In this paper, the authors present several analysis results based on general FNN structures and use several examples to examine the relation between weight errors and output classifications. A lower bound for L, the number of bits used to quantize the weights, is derived in the worst case. This paper also includes the detailed analysis of AND-gates
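    The abstract's setup — weights quantized to L bits, with a worst-case bound on the resulting error — can be illustrated with a minimal sketch. The functions `quantize` and `max_weight_error` below are illustrative assumptions (uniform midrise quantization over a symmetric range), not the paper's actual derivation:

    ```python
    # Hedged sketch, not the paper's method: uniformly quantize FNN weights
    # to 2**L levels on [-w_max, w_max] and bound the per-weight error by
    # half a quantization step.

    def quantize(w, L, w_max=1.0):
        """Round weight w to the nearest of 2**L uniformly spaced levels in [-w_max, w_max]."""
        step = 2 * w_max / (2**L - 1)
        return round(w / step) * step

    def max_weight_error(L, w_max=1.0):
        """Worst-case quantization error: half the step size."""
        return w_max / (2**L - 1)

    weights = [0.37, -0.82, 0.05]
    L = 8
    quantized = [quantize(w, L) for w in weights]
    errors = [abs(w - q) for w, q in zip(weights, quantized)]
    # Every per-weight error stays within the worst-case bound.
    assert max(errors) <= max_weight_error(L) + 1e-12
    ```

    Increasing L halves the worst-case error per extra bit, which is why the paper's lower bound on L translates directly into a minimum hardware precision.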
  • Keywords
    error analysis; feedforward neural nets; feedforward neural networks; lower bound; output classifications; quantized weights; weight errors; worst case; Computer errors; Computer networks; Equations; Error analysis; Feedforward neural networks; Large-scale systems; Neural network hardware; Neural networks; Quantization; Training data
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Southeastcon '94: Creative Technology Transfer - A Global Affair, Proceedings of the 1994 IEEE
  • Conference_Location
    Miami, FL
  • Print_ISBN
    0-7803-1797-1
  • Type
    conf
  • DOI
    10.1109/SECON.1994.324361
  • Filename
    324361