• DocumentCode
    850459
  • Title

    Implementation of neural networks on massive memory organizations

  • Author

    Misra, Manavendra; Prasanna, Viktor K.

  • Author_Institution
    Dept. of Electr. Eng.-Syst., Univ. of Southern California, Los Angeles, CA, USA
  • Volume
    39
  • Issue
    7
  • fYear
    1992
  • fDate
    7/1/1992 12:00:00 AM
  • Firstpage
    476
  • Lastpage
    480
  • Abstract
    Simulations of artificial neural networks (ANNs) on serial machines have proved to be too slow to be of practical significance. It was realized that parallel machines would have to be used to exploit the inherent parallelism in these models. The SIMD architecture presented has n PEs and n² memory modules arranged in an n×n array. This massive memory is used to store the weights of the neural network being simulated. It is shown how networks with sparse connectivity among neurons can be simulated in O(√(n+e)) time, where n is the number of neurons and e the number of interconnections in the network. Preprocessing is carried out on the connection matrix of the sparse network, resulting in data movement that has an optimal asymptotic time complexity and a small constant factor.
  • Keywords
    memory architecture; neural nets; parallel architectures; virtual machines; SIMD architecture; asymptotic time complexity; connection matrix; constant factor; data movement; massive memory organizations; memory modules; neural networks; parallel machines; preprocessing; sparse connectivity; sparse network; Artificial neural networks; Biological neural networks; Biology computing; Computational modeling; Computer networks; Neural networks; Neurons; Parallel machines; Parallel processing; Sparse matrices;
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing
  • Publisher
    IEEE
  • ISSN
    1057-7130
  • Type

    jour

  • DOI
    10.1109/82.160171
  • Filename
    160171