Title :
Customizing parallel formulations of backpropagation learning algorithm to neural network architectures: a summary of results
Author :
Amin, Minesh B. ; Shekhar, Shashi
Author_Institution :
Dept. of Comput. Sci., Minnesota Univ., Minneapolis, MN, USA
Abstract :
Several generic parallel formulations of the backpropagation learning algorithm have been proposed recently. Further speedups are possible by customizing the parallel formulation to the architecture of the neural network. The paper addresses the issue of customizing parallel formulations of the backpropagation learning algorithm to a given neural network architecture on multiprocessors with a hypercube-like communication topology. We introduce a new parallel formulation, called rectangular checkerboarding, which adapts to the network architecture and can provide performance gains for non-uniform neural networks, where the number of nodes varies across the layers. Algebraic analysis shows that each instance of rectangular checkerboarding (using a specific rectangular processor grid) is optimal for an important family of network architectures. Experiments on the CM-5 show that customizing to the network architecture can provide significant (~50%) performance gains for many interesting non-uniform neural network architectures that are currently used in important applications. We also introduce the staircase framework, which can use different processor grids for different layers of a neural network.
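To make the checkerboarding idea concrete, the following is an illustrative sketch (not taken from the paper) of how one layer's weight matrix could be block-partitioned over a pr x pc rectangular processor grid; all function and variable names here are assumptions for illustration only.

```python
# Illustrative sketch: checkerboard-style block partitioning of a single
# layer's n_out x n_in weight matrix over a pr x pc processor grid.
# Names and interface are assumptions, not the paper's actual code.

def checkerboard_partition(n_in, n_out, pr, pc):
    """Assign to each processor at grid position (i, j) a contiguous
    block of the weight matrix, returned as ((row0, row1), (col0, col1))
    half-open index ranges."""
    blocks = {}
    for i in range(pr):
        for j in range(pc):
            # Output units (matrix rows) are split across the pr grid rows;
            # input units (matrix columns) across the pc grid columns.
            r0 = i * n_out // pr
            r1 = (i + 1) * n_out // pr
            c0 = j * n_in // pc
            c1 = (j + 1) * n_in // pc
            blocks[(i, j)] = ((r0, r1), (c0, c1))
    return blocks

# For a non-uniform layer (e.g. 1024 inputs feeding 16 outputs), a wide
# 2 x 8 grid yields 8 x 128 blocks, whereas a square 4 x 4 grid would
# yield skinnier 4 x 256 blocks with different communication costs.
print(checkerboard_partition(1024, 16, 2, 8)[(0, 0)])
```

The point of the non-square grid is that, for layers where input and output counts differ greatly, choosing pr and pc to match the layer's shape (rather than fixing a square grid) changes the per-processor block shape and hence the communication volume, which is the effect the paper's rectangular checkerboarding and staircase framework exploit.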
Keywords :
backpropagation; hypercube networks; multiprocessing systems; neural net architecture; symbol manipulation; CM-5; algebraic analysis; backpropagation learning algorithm; generic parallel formulations; hypercube-like communication topology; multiprocessors; neural network architectures; nonuniform neural networks; parallel formulation customisation; performance gains; rectangular checkerboarding; rectangular processor grid; speedups; staircase framework; Backpropagation algorithms; Broadcasting; Computer architecture; Computer science; Concurrent computing; Hypercubes; Neural networks; Partitioning algorithms; Performance gain; Shape;
Conference_Titel :
Proceedings of the Sixth International Conference on Tools with Artificial Intelligence, 1994
Conference_Location :
New Orleans, LA
Print_ISBN :
0-8186-6785-0
DOI :
10.1109/TAI.1994.346497