Title :
A parallel implementation of the batch backpropagation training of neural networks
Author :
Novokhodko, Alexander ; Valentine, Scott
Author_Institution :
Dept. of Electr. & Comput. Eng., Missouri Univ., Rolla, MO, USA
Abstract :
Neural networks, being naturally parallel, inspire researchers to seek efficient implementations for various parallel architectures. However, the vast fine-grain parallelism of many tightly connected simple nodes poses a problem for traditional parallel computing on a small number of powerful processors. One approach is to parallelize not the neural network itself, but the process of its training, which is the most numerically intensive part of neural network computing. During batch training, each input pattern/signal is presented to the neural network, a response is obtained and evaluated, and the direction in which the network parameters should change (the cost-function gradient) is computed with the backpropagation algorithm. In this work, parallelism is obtained by parallelizing MATLAB's matrix multiplication routine; the message passing interface (MPI) is used for the parallel implementation. The implementation makes it possible to parallelize procedures offered by MATLAB that are otherwise not parallel.
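As a rough illustration of the data-parallel batch scheme described above (not the authors' actual code, which wraps MATLAB's matrix multiplication), a minimal MPI sketch in C might split the training batch across processes, let each process accumulate a partial cost-function gradient by backpropagation, and sum the partial gradients with a reduction. The helper local_gradient() and the problem sizes are hypothetical placeholders.

/* Minimal sketch, assuming a batch of NUM_PATTERNS training patterns and a
 * network with NUM_WEIGHTS parameters.  Each MPI process handles an
 * interleaved slice of the batch; MPI_Allreduce sums the partial gradients
 * so every process holds the full batch gradient. */
#include <mpi.h>
#include <stdlib.h>

#define NUM_PATTERNS 1024   /* size of the training batch (assumed)  */
#define NUM_WEIGHTS  4096   /* number of network parameters (assumed) */

/* Hypothetical stand-in for a forward pass plus backpropagation over one
 * input pattern; a real version would evaluate the network on the pattern
 * and add the resulting cost-function gradient into grad[]. */
static void local_gradient(int pattern, double *grad)
{
    for (int w = 0; w < NUM_WEIGHTS; ++w)
        grad[w] += (double)pattern * 1e-6;   /* placeholder arithmetic */
}

int main(int argc, char **argv)
{
    int rank, size, p;
    double *grad_local  = calloc(NUM_WEIGHTS, sizeof(double));
    double *grad_global = calloc(NUM_WEIGHTS, sizeof(double));

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process processes every size-th pattern of the batch. */
    for (p = rank; p < NUM_PATTERNS; p += size)
        local_gradient(p, grad_local);

    /* Sum the partial gradients across all processes. */
    MPI_Allreduce(grad_local, grad_global, NUM_WEIGHTS,
                  MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* ...a weight update using grad_global would follow, once per epoch... */

    MPI_Finalize();
    free(grad_local);
    free(grad_global);
    return 0;
}

The same pattern extends to the matrix-level parallelism the paper targets: when the batch is stored as a matrix, the per-pattern loop becomes a distributed matrix multiplication whose partial products are combined by the same kind of reduction.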
Keywords :
backpropagation; batch processing (computers); matrix multiplication; message passing; multilayer perceptrons; neural net architecture; parallel processing; MATLAB; batch backpropagation; batch training; cost function gradient; matrix multiplication routine; message passing interface; multilayer perceptron; neural networks; parallel architectures; parallel processing; Backpropagation algorithms; Computer languages; Computer networks; Concurrent computing; Cost function; MATLAB; Message passing; Neural networks; Parallel architectures; Parallel processing;
Conference_Title :
Proceedings of the International Joint Conference on Neural Networks (IJCNN '01), 2001
Conference_Location :
Washington, DC
Print_ISBN :
0-7803-7044-9
DOI :
10.1109/IJCNN.2001.938432