DocumentCode
797124
Title
Efficient mapping of ANNs on hypercube massively parallel machines
Author
Malluhi, Q.M. ; Bayoumi, Magdy A. ; Rao, T.R.N.
Author_Institution
Dept. of Comput. Sci., Jackson State Univ., MS, USA
Volume
44
Issue
6
fYear
1995
fDate
6/1/1995
Firstpage
769
Lastpage
779
Abstract
This paper presents a technique for mapping artificial neural networks (ANNs) on hypercube massively parallel machines. The paper starts by synthesizing a parallel structure, the mesh-of-appendixed-trees (MAT), for fast ANN implementation. Then, it presents a recursive procedure to embed the MAT structure into the hypercube topology. This procedure is used as the basis for an efficient mapping of ANN computations on hypercube systems. Both the multilayer feedforward with backpropagation (FFBP) and the Hopfield ANN models are considered. Algorithms to implement the recall and the training phases of the FFBP model, as well as the recall phase of the Hopfield model, are provided. The major advantage of our technique is high performance. Unlike other techniques presented in the literature, which require O(N) time, where N is the size of the largest layer, our implementation requires only O(log N) time. Moreover, it allows pipelining of more than one input pattern and thus further improves the performance.
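For intuition only (this is not the paper's MAT embedding, whose details are in the full text): the O(log N) bound reflects the fact that a neuron's weighted sum over N inputs can be reduced along hypercube dimensions in log2 N communication steps. The minimal Python sketch below simulates that recursive-doubling reduction on a 2^d-node hypercube; the function name and setup are hypothetical illustrations.

```python
import numpy as np

def hypercube_allreduce_sum(values):
    """Simulate a recursive-doubling sum on a d-dimensional hypercube.

    Each of the 2**d "processors" holds one partial product w[i] * x[i].
    In step k, every processor exchanges its running sum with the neighbor
    whose index differs in bit k, so after d = log2(N) steps every
    processor holds the full weighted sum.
    """
    n = len(values)
    assert n and n & (n - 1) == 0, "hypercube needs a power-of-two size"
    sums = list(values)
    d = n.bit_length() - 1
    for k in range(d):  # d = log2(N) communication steps
        sums = [sums[i] + sums[i ^ (1 << k)] for i in range(n)]
    return sums  # every node now holds the same total

# One neuron's weighted sum computed in O(log N) parallel steps:
rng = np.random.default_rng(0)
w, x = rng.standard_normal(8), rng.standard_normal(8)
assert np.allclose(hypercube_allreduce_sum(w * x)[0], np.dot(w, x))
```

Because every node ends the reduction holding the result, successive input patterns can occupy different stages of the network at once, which is the pipelining opportunity the abstract mentions.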
Keywords
backpropagation; feedforward neural nets; hypercube networks; parallel machines; Hopfield ANN models; artificial neural networks; efficient mapping; hypercube massively parallel machines; mesh-of-appendixed-trees; multilayer feedforward with backpropagation; parallel structure; pipelining; Artificial neural networks; Computational modeling; Hypercubes; Neural networks; Neurons; Nonhomogeneous media; Parallel architectures; Parallel machines; Parallel processing; Very large scale integration;
fLanguage
English
Journal_Title
IEEE Transactions on Computers
Publisher
IEEE
ISSN
0018-9340
Type
jour
DOI
10.1109/12.391184
Filename
391184