Title :
A GPGPU-Based Acceleration of Fault-Tolerant MLP Learnings
Author :
Horita, Tadayoshi ; Takanami, Itsuo ; Akiba, Masahiro ; Terauchi, Mina ; Kanno, Tsuneo
Author_Institution :
Polytechnic Univ., Kodaira, Japan
Abstract :
A method is proposed to speed up, using GPGPU technology, the learning process proposed in [1]. The method in [1], called the deep learning method (Deep LM), produces a multilayer perceptron (MLP) that, once successfully trained, is tolerant to multiple weight and neuron faults, where the weight faults lie between the hidden and output layers and the neuron faults lie in the hidden layer. The core of the Deep LM is the traditional back-propagation (BP) algorithm, but learning with the Deep LM is time-consuming because of the additional processes needed to realize fault tolerance. A further process uses a SECDED code to detect or correct neuron faults in the output layer. To cope with this, the hot spots of the Deep LM are identified and outlines of their CUDA C source codes, including an auto-tuning process for the hot spots, are shown. The computing speeds of the Deep LM on a GPU are then compared with those on a CPU for concrete character-recognition examples, and the GPU is shown to be dramatically faster.
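Illustrative sketch (not from the paper): the abstract does not reproduce the authors' CUDA C code, but a hot spot of MLP learning and the kind of auto-tuning it mentions can be sketched as follows. The kernel below computes the hidden-layer forward pass, and the host loop times several thread-block sizes and keeps the fastest; the sizes N_IN and N_HID, the kernel name, and the candidate block sizes are all assumptions for illustration.

// Minimal sketch, assuming a single hidden layer with a logistic
// activation. One thread computes one hidden neuron:
//   h[j] = sigmoid( sum_i w[j][i] * x[i] ).
#include <cstdio>
#include <cuda_runtime.h>

#define N_IN   256   // assumed input-layer size (illustrative)
#define N_HID  512   // assumed hidden-layer size (illustrative)

__global__ void hiddenForward(const float *w, const float *x, float *h)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j >= N_HID) return;
    float s = 0.0f;
    for (int i = 0; i < N_IN; ++i)
        s += w[j * N_IN + i] * x[i];
    h[j] = 1.0f / (1.0f + expf(-s));   // logistic activation
}

int main()
{
    float *w, *x, *h;
    cudaMalloc(&w, N_HID * N_IN * sizeof(float));
    cudaMalloc(&x, N_IN * sizeof(float));
    cudaMalloc(&h, N_HID * sizeof(float));
    // (weights and inputs would be initialized here in a real run)

    // Auto-tuning sketch: time a few block sizes and keep the fastest,
    // in the spirit of the auto-tuning process the abstract mentions.
    int candidates[] = {32, 64, 128, 256, 512};
    int best = 32; float bestMs = 1e30f;
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    for (int c = 0; c < 5; ++c) {
        int bs   = candidates[c];
        int grid = (N_HID + bs - 1) / bs;
        cudaEventRecord(t0);
        for (int rep = 0; rep < 100; ++rep)   // repeat for a stable timing
            hiddenForward<<<grid, bs>>>(w, x, h);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);
        float ms; cudaEventElapsedTime(&ms, t0, t1);
        if (ms < bestMs) { bestMs = ms; best = bs; }
    }
    printf("selected block size: %d (%.3f ms / 100 launches)\n", best, bestMs);

    cudaFree(w); cudaFree(x); cudaFree(h);
    return 0;
}

The same pattern would apply to the other hot spots (the backward pass and the fault-injection steps of the Deep LM), each tuned independently.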
Keywords :
backpropagation; graphics processing units; multilayer perceptrons; BP algorithm; Deep LM; GPGPU-based acceleration; back-propagation algorithm; deep learning method; fault-tolerant MLP learnings; multilayer perceptron; Acceleration; Fault tolerance; Fault tolerant systems; Graphics processing units; Indexes; Neurons; Programming; CUDA; GPGPU; GPU; fault-tolerance; multilayer perceptron;
Conference_Title :
Embedded Multicore/Manycore SoCs (MCSoC), 2014 IEEE 8th International Symposium on
Conference_Location :
Aizu-Wakamatsu
DOI :
10.1109/MCSoC.2014.42