Title :
Scalable energy-efficient, low-latency implementations of trained spiking Deep Belief Networks on SpiNNaker
Author :
Evangelos Stromatias;Daniel Neil;Francesco Galluppi;Michael Pfeiffer;Shih-Chii Liu;Steve Furber
Author_Institution :
Advanced Processor Technologies Group, School of Computer Science, University of Manchester, M13 9PL, United Kingdom
Date :
1 July 2015
Abstract :
Deep neural networks have become the state-of-the-art approach for classification in machine learning, and Deep Belief Networks (DBNs) are among their most successful representatives. DBNs consist of many neuron-like units, which are connected only to neurons in neighboring layers. Larger DBNs have been shown to perform better, but scaling up poses problems for conventional CPUs, which calls for efficient implementations on parallel computing architectures that, in particular, reduce the communication overhead. In this context we present a realization of a spike-based variant of previously trained DBNs on the biologically inspired parallel SpiNNaker platform. The DBN on SpiNNaker runs in real time and achieves a classification performance of 95% on the MNIST handwritten digit dataset, only 0.06% below that of a pure software implementation. Importantly, the neurally inspired architecture yields additional benefits: during network run time on this task, the platform consumes only 0.3 W, with classification latencies on the order of tens of milliseconds, making it suitable for implementing such networks on mobile platforms. The results in this paper also show how the power dissipation of the SpiNNaker platform and the classification latency of a network scale with the number of neurons and layers in the network and with the overall spike activity rate.
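Illustrative_Example :
The abstract describes running a previously trained DBN as a spiking network: input pixels are converted to spike trains and each unit behaves as a spiking neuron, with the predicted digit read out from the output layer's spike activity. The sketch below illustrates that general idea in plain NumPy; it is not the authors' SpiNNaker implementation. The 784-500-500-10 topology, the random placeholder weights, and all neuron parameters (threshold, time constant, input rate) are illustrative assumptions; a real pipeline would load weights from offline DBN training.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative layer sizes for MNIST (28x28 inputs, 10 output classes);
    # the actual topology used in the paper may differ.
    sizes = [784, 500, 500, 10]

    # Placeholder weights; in practice these come from offline DBN training.
    weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

    def classify(pixels, weights, t_steps=200, dt_ms=1.0,
                 max_rate_hz=1000.0, v_thresh=1.0, tau_ms=20.0):
        """Feedforward spiking pass: pixel intensities drive Poisson input
        spikes, each hidden/output unit is a leaky integrate-and-fire neuron,
        and the predicted class is the output neuron that spikes most often."""
        decay = np.exp(-dt_ms / tau_ms)          # membrane leak per time step
        v = [np.zeros(n) for n in sizes[1:]]     # membrane potentials
        out_counts = np.zeros(sizes[-1])
        p_spike = pixels * max_rate_hz * dt_ms / 1000.0
        for _ in range(t_steps):
            spikes = (rng.random(sizes[0]) < p_spike).astype(float)
            for layer, w in enumerate(weights):
                v[layer] = v[layer] * decay + spikes @ w
                spikes = (v[layer] >= v_thresh).astype(float)
                v[layer][spikes > 0] = 0.0       # reset neurons that fired
            out_counts += spikes
        return int(np.argmax(out_counts))

    # Usage: classify one random "image" of pixel intensities in [0, 1].
    print("predicted class:", classify(rng.random(784), weights))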
Keywords :
"Neurons","Topology","MATLAB","Clocks"
Conference_Title :
2015 International Joint Conference on Neural Networks (IJCNN)
Electronic_ISSN :
2161-4407
DOI :
10.1109/IJCNN.2015.7280625