  • DocumentCode
    288304
  • Title
    High speed parallel hardware performance issues for neural network applications
  • Author
    Means, Robert W.
  • Author_Institution
    HNC Inc., San Diego, CA, USA
  • Volume
    1
  • fYear
    1994
  • fDate
    27 Jun-2 Jul 1994
  • Firstpage
    10
  • Abstract
    Neural network applications push the envelope of high-speed computers in several areas. Pattern recognition applications often involve large amounts of training data and large neural networks. Significant preprocessing of the data is often necessary, and real-time operation is the ultimate goal. Each of these steps stresses different components of the computer system architecture. Since neural network architectures are inherently parallel, the computations required for learning and classification should map efficiently onto fast, highly parallel computers. Two of the most widely used neural networks, multilayer backpropagation networks and competitive learning (Kohonen layer) neural networks, are examined and analyzed for parallel implementation. The common mathematical thread in the training and use of these networks is that the algorithms rely on standard linear algebra functions operating on vectors and matrices. The same thread appears in radial basis function networks and probabilistic neural networks. The implications of this observation for parallel computer architecture are presented. (An illustrative sketch of this linear-algebra view follows the record below.)
  • Keywords
    backpropagation; multilayer perceptrons; neural net architecture; parallel architectures; self-organising feature maps; Kohonen layer neural networks; competitive learning neural networks; high-speed parallel hardware performance issues; large neural networks; linear algebra functions; multilayer backpropagation networks; neural network applications; pattern recognition applications; preprocessing; probabilistic neural networks; radial basis function networks; training data; Application software; Computer architecture; Computer networks; Concurrent computing; Multi-layer neural network; Neural network hardware; Neural networks; Pattern recognition; Training data
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence)
  • Conference_Location
    Orlando, FL
  • Print_ISBN
    0-7803-1901-X
  • Type
    conf
  • DOI
    10.1109/ICNN.1994.374130
  • Filename
    374130
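
A minimal illustrative sketch, not taken from the paper: the "common linear-algebra thread" described in the abstract can be made concrete in NumPy. One backpropagation layer reduces to matrix-vector products plus an outer-product weight update, and one competitive-learning (Kohonen-style) step reduces to distance computations and a vector update. All layer sizes, learning rates, and variable names below are arbitrary placeholders chosen for illustration.

# Illustrative sketch (not from the paper): the linear-algebra view of
# backpropagation and competitive learning described in the abstract.
# Sizes and learning rates are arbitrary placeholder values.
import numpy as np

rng = np.random.default_rng(0)

# --- One backpropagation layer: forward and backward passes are matrix ops ---
n_in, n_out = 8, 4
W = rng.standard_normal((n_out, n_in)) * 0.1   # weight matrix
b = np.zeros(n_out)                            # bias vector
x = rng.standard_normal(n_in)                  # input vector

z = W @ x + b                                  # matrix-vector product
y = np.tanh(z)                                 # elementwise nonlinearity

grad_y = rng.standard_normal(n_out)            # error signal from the layer above
grad_z = grad_y * (1.0 - y**2)                 # tanh derivative, elementwise
grad_W = np.outer(grad_z, x)                   # rank-1 outer-product weight gradient
grad_x = W.T @ grad_z                          # transposed matrix-vector product
W -= 0.01 * grad_W                             # gradient step

# --- One competitive-learning (Kohonen-style) update: also vector/matrix ops ---
n_units = 6
M = rng.standard_normal((n_units, n_in))       # one weight vector per unit
dists = np.linalg.norm(M - x, axis=1)          # distance from x to every unit
winner = int(np.argmin(dists))                 # best-matching unit
M[winner] += 0.1 * (x - M[winner])             # move the winner toward the input

print("layer output:", y)
print("winning unit:", winner)

As the abstract notes, these operations are standard linear algebra kernels (matrix-vector products, outer products, and reductions), which is what makes such networks natural candidates for fast, highly parallel hardware.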