DocumentCode
174651
Title
Accelerating divergent applications on SIMD architectures using neural networks
Author
Grigorian, B.; Reinman, G.
Author_Institution
Comput. Sci. Dept., Univ. of California, Los Angeles, Los Angeles, CA, USA
fYear
2014
fDate
19-22 Oct. 2014
Firstpage
317
Lastpage
323
Abstract
In this work, we investigate neural-network-based solutions to the well-known problem of branch divergence in Single Instruction Multiple Data (SIMD) architectures. Our approach isolates code regions with performance degradation due to branch divergence, trains neural networks (NNs) offline to approximate these regions, and replaces the regions with their NN approximations. By directly manipulating source code, this platform-agnostic methodology translates control flow into non-divergent computation, trading off precision for performance and energy gains. We present the Neuralizer (our automated software flow), and evaluate our approach on various divergent GPU applications, achieving average performance gains of 13.6× and energy savings of 14.8× with 96% accuracy.
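To make the abstract's idea concrete, the following is a minimal CUDA sketch, not taken from the paper or the Neuralizer tool: the functions divergent_region, nn_region, and approx_kernel, the 1-4-1 network topology, and the placeholder weights are all hypothetical. It only illustrates the general technique the abstract describes, replacing a data-dependent branch with a small offline-trained NN evaluated as straight-line, non-divergent arithmetic.

#include <cuda_runtime.h>
#include <cstdio>

#define HIDDEN 4

// Hypothetical divergent region (illustration only): a data-dependent branch
// sends lanes of the same warp down different paths, serializing execution.
__device__ float divergent_region(float x) {
    if (x > 0.5f) return sqrtf(x) * 2.0f;   // some lanes take this path
    else          return x * x + 0.25f;     // the rest take this one
}

// Non-divergent replacement: a tiny 1-4-1 feed-forward NN, with weights that
// would be fitted offline to the region's input/output behavior, evaluated as
// straight-line arithmetic so every lane runs the same instruction stream.
__constant__ float w1[HIDDEN], b1[HIDDEN], w2[HIDDEN], b2;

__device__ float nn_region(float x) {
    float acc = b2;
    #pragma unroll
    for (int i = 0; i < HIDDEN; ++i)
        acc += w2[i] * tanhf(w1[i] * x + b1[i]);
    return acc;   // approximate result; accuracy traded for uniform control flow
}

__global__ void approx_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = nn_region(in[i]);
}

int main() {
    // Placeholder weights; an automated flow would substitute offline-trained values.
    float hw1[HIDDEN] = {1.f, 2.f, 3.f, 4.f}, hb1[HIDDEN] = {0.f, 0.f, 0.f, 0.f};
    float hw2[HIDDEN] = {0.1f, 0.2f, 0.3f, 0.4f}, hb2 = 0.f;
    cudaMemcpyToSymbol(w1, hw1, sizeof(hw1));
    cudaMemcpyToSymbol(b1, hb1, sizeof(hb1));
    cudaMemcpyToSymbol(w2, hw2, sizeof(hw2));
    cudaMemcpyToSymbol(b2, &hb2, sizeof(hb2));

    const int n = 1024;
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float));          // dummy inputs
    approx_kernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    cudaDeviceSynchronize();
    printf("kernel finished: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}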
Keywords
approximation theory; flow control; graphics processing units; neural nets; parallel processing; source code (software); GPU applications; NN approximations; SIMD architectures; accelerating divergent applications; branch divergence; performance degradation; energy gains; energy savings; neural-network-based solutions; Neuralizer; nondivergent computation; platform-agnostic methodology; single instruction multiple data architectures; source code region isolation; trading off precision; Approximation methods; Artificial neural networks; Benchmark testing; Graphics processing units; Kernel; Training; Approximate Computing; Branch Divergence; Hardware Acceleration; Neural Networks; SIMD
fLanguage
English
Publisher
ieee
Conference_Titel
2014 32nd IEEE International Conference on Computer Design (ICCD)
Conference_Location
Seoul
Type
conf
DOI
10.1109/ICCD.2014.6974700
Filename
6974700