Title :
Mapping of neural network models onto massively parallel hierarchical computer systems
Author_Institution :
Dept. of Comput. Sci., Eng. & Appl., Regional Eng. Coll., Orissa, India
Abstract :
Investigates the implementation of neural networks on massively parallel hierarchical computer systems with hypernet topology. The proposed mapping scheme exploits the inherent structure of hypernets to process multiple copies of the neural network in the different subnets, each executing a portion of the training set. Finally, the weight changes computed in all the subnets are accumulated to adjust the synaptic weights in every copy. An expression is derived to estimate the time for all-to-all broadcasting, the principal mode of communication when implementing neural networks on parallel computers. This expression is then used to estimate the time required for the various phases of the neural network algorithm, and thus the speedup achieved by the hypernet in implementing neural networks.
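The training-set parallelism described above (each subnet trains a full copy of the network on a slice of the data, and the per-subnet weight changes are accumulated and applied to all copies) can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the single linear neuron, function names, and learning rate are assumptions, and the accumulation step stands in for the hypernet's all-to-all broadcast.

```python
def local_delta(w, batch, lr=0.1):
    """Weight change computed by one subnet on its slice of the
    training set (single linear neuron, squared-error gradient).
    Hypothetical stand-in for one copy of the network."""
    d = [0.0] * len(w)
    for x, t in batch:
        y = sum(wi * xi for wi, xi in zip(w, x))  # forward pass
        err = t - y
        for i, xi in enumerate(x):
            d[i] += lr * err * xi  # accumulate local weight change
    return d

def parallel_epoch(w, data, n_subnets):
    """One epoch of training-set parallelism across n_subnets copies."""
    # Partition the training set among the subnets.
    slices = [data[i::n_subnets] for i in range(n_subnets)]
    # Each subnet computes its weight change independently.
    deltas = [local_delta(w, s) for s in slices]
    # Accumulation step (the all-to-all broadcast in the hypernet):
    # sum the deltas from every subnet and apply the same update
    # to all copies, keeping them synchronized.
    return [wi + sum(d[i] for d in deltas) for i, wi in enumerate(w)]
```

Because the weight changes are summed over examples before being applied, an epoch distributed over any number of subnets yields the same update as a single sequential pass; the gain is purely in parallel execution time, which is why the communication cost of the accumulation step dominates the speedup analysis.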
Keywords :
broadcasting; hierarchical systems; hypercube networks; learning (artificial intelligence); multilayer perceptrons; network topology; neural net architecture; parallel architectures; performance evaluation; all-to-all broadcasting; communication mode; execution phases; execution time estimation; hypernet topology; learning phase; massively parallel hierarchical computer systems; multilayer perceptron; multiple copies; neural network model mapping; parallel computers; recall phase; speedup performance; subnets; synaptic weights; training set; training set parallelism; weight changes; Computer networks; Concurrent computing; Feedforward systems; Neural networks;
Conference_Titel :
Proceedings of the Fourth International Conference on High-Performance Computing, 1997
Conference_Location :
Bangalore
Print_ISBN :
0-8186-8067-9
DOI :
10.1109/HIPC.1997.634468