Title :
An efficient mapping of multilayer perceptron with backpropagation ANNs on hypercubes
Author :
Malluhi, Q.M. ; Bayoumi, M.A. ; Rao, T.R.N.
Author_Institution :
Center for Advanced Comput. Studies, Univ. of Southwestern Louisiana, Lafayette, LA, USA
Abstract :
This paper proposes a parallel structure, the mesh-of-appendixed-trees (MAT), for efficient implementation of artificial neural networks (ANNs). Algorithms to implement both the recall and the training phases of the multilayer perceptron with backpropagation ANN model are provided. A recursive procedure for embedding the MAT structure into the hypercube topology is used as the basis for an efficient technique to map ANN computations onto general-purpose massively parallel hypercube systems. In addition, based on the mapping scheme, a fast special-purpose parallel architecture for ANNs is developed. The major advantage of our technique is high performance. Unlike other techniques presented in the literature, which require O(N) time, where N is the size of the largest layer, our implementation requires only O(log N) time. Moreover, it allows the pipelining of more than one input pattern and thus further improves the performance.
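The O(log N) bound rests on a standard idea that tree-structured networks make possible: a neuron's weighted sum over N inputs can be combined pairwise in ceil(log2 N) steps instead of N sequential additions. The sketch below is only an illustration of that general principle, not the paper's MAT algorithm; the `tanh` activation and the function names are assumptions for the example.

```python
import math

def tree_reduce(values):
    """Pairwise (tree) reduction: sums n values in ceil(log2 n) combining
    steps, mirroring how a tree of processors accumulates partial products
    in O(log N) parallel depth rather than O(N) sequential time."""
    steps = 0
    while len(values) > 1:
        values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
        steps += 1
    return values[0], steps

def layer_recall(weights, x):
    """Recall for one layer: each neuron forms its weighted sum via tree
    reduction. On parallel hardware the neurons run concurrently, so the
    depth is the maximum combining steps over neurons, i.e. O(log N)."""
    outputs, depth = [], 0
    for row in weights:
        s, steps = tree_reduce([w * xi for w, xi in zip(row, x)])
        outputs.append(math.tanh(s))  # tanh is an assumed activation
        depth = max(depth, steps)
    return outputs, depth
```

For an 8-input neuron, `tree_reduce` finishes in 3 combining steps (log2 8), whereas a serial accumulation would take 7 additions; this gap is the source of the O(log N) vs. O(N) comparison in the abstract.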
Keywords :
backpropagation; computational complexity; hypercube networks; multilayer perceptrons; neural nets; parallel architectures; artificial neural networks; hypercube topology; hypercubes; mapping; massively parallel hypercube systems; mesh-of-appendixed-trees; multilayer perceptron; parallel architecture; pipelining; recall; training; Artificial neural networks; Backpropagation algorithms; Computer architecture; Concurrent computing; Hypercubes; Multi-layer neural network; Multilayer perceptrons; Neural networks; Neurons; Pipeline processing;
Conference_Titel :
Proceedings of the Fifth IEEE Symposium on Parallel and Distributed Processing, 1993
Conference_Location :
Dallas, TX
Print_ISBN :
0-8186-4222-X
DOI :
10.1109/SPDP.1993.395509