  • DocumentCode
    883138
  • Title
    Finite precision error analysis of neural network hardware implementations
  • Author
    Holi, J.L.; Hwang, Jenq-Neng
  • Author_Institution
    Adaptive Solutions Inc., Beaverton, OR, USA
  • Volume
    42
  • Issue
    3
  • fYear
    1993
  • fDate
    3/1/1993
  • Firstpage
    281
  • Lastpage
    290
  • Abstract
    Through parallel processing, low-precision fixed-point hardware can be used to build a very high-speed neural network computing engine, where the low precision yields a drastic reduction in system cost. The reduced silicon area required to implement a single processing unit is exploited by placing multiple processing units on a single piece of silicon and operating them in parallel. The key question that arises is how much precision is required to implement neural network algorithms on such low-precision hardware. A theoretical analysis of the error due to finite-precision computation was undertaken to determine the precision necessary for successful forward retrieving and back-propagation learning in a multilayer perceptron. The analysis extends readily to a general finite-precision analysis technique by which most neural network algorithms can be evaluated under any set of hardware constraints.
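    As a rough illustration of the question the abstract poses (how many bits of precision a forward pass needs), the sketch below simulates a fixed-point forward pass through a small multilayer perceptron and measures its output error against a full-precision reference. It is a minimal sketch under assumed choices (round-to-nearest quantization, tanh activations, hypothetical layer sizes), not the paper's theoretical analysis.

      import numpy as np

      def quantize(x, frac_bits):
          # Round x to the nearest multiple of 2**-frac_bits,
          # mimicking a fixed-point representation with frac_bits
          # fractional bits (round-to-nearest assumed).
          scale = 2.0 ** frac_bits
          return np.round(x * scale) / scale

      def forward(x, weights, frac_bits=None):
          # Forward pass through an MLP; if frac_bits is given,
          # inputs, weights, and accumulated products are quantized.
          q = (lambda v: quantize(v, frac_bits)) if frac_bits is not None \
              else (lambda v: v)
          for W in weights:
              x = np.tanh(q(q(x) @ q(W)))
          return x

      rng = np.random.default_rng(0)
      x = rng.normal(size=(1, 8))                      # hypothetical input
      weights = [rng.normal(scale=0.5, size=(8, 8)),   # hypothetical layers
                 rng.normal(scale=0.5, size=(8, 4))]

      ref = forward(x, weights)                        # full-precision reference
      for bits in (4, 8, 12, 16):
          err = np.max(np.abs(forward(x, weights, bits) - ref))
          print(f"{bits:2d} fractional bits -> max output error {err:.2e}")

    As expected from a quantization step of 2**-n, each added fractional bit roughly halves the observed error; the paper's contribution is to bound such errors analytically for both forward retrieving and back-propagation learning.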
  • Keywords
    error analysis; feedforward neural nets; neural chips; back-propagation learning; finite precision computation; forward retrieving; low precision; multilayer perceptron; neural network algorithms; neural network hardware; parallel processing; silicon area; system cost; Algorithm design and analysis; Computer networks; Concurrent computing; Costs; Engines; Error analysis; Neural network hardware; Neural networks; Parallel processing; Silicon
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Computers
  • Publisher
    IEEE
  • ISSN
    0018-9340
  • Type
    jour
  • DOI
    10.1109/12.210171
  • Filename
    210171