Title :
GPU-TLS: An Efficient Runtime for Speculative Loop Parallelization on GPUs
Author :
Chenggang Zhang ; Guodong Han ; Cho-Li Wang
Author_Institution :
Dept. of Comput. Sci., Univ. of Hong Kong, Hong Kong, China
Abstract :
Recently, GPUs have emerged as an important parallel platform for general-purpose applications, in both HPC and cloud environments. Due to their special execution model, developing programs for GPUs remains difficult even with the recent introduction of high-level languages such as CUDA and OpenCL. To ease the programming effort, some research has proposed automatically generating parallel GPU code using complex compile-time techniques. However, this approach can only parallelize loops that are completely free of inter-iteration dependencies (i.e., DOALL loops). To exploit runtime parallelism that cannot be proven by static analysis, we propose GPU-TLS, a runtime system that speculatively parallelizes possibly-parallel loops of sequential programs on GPUs. GPU-TLS parallelizes a possibly-parallel loop by chopping it into smaller sub-loops, each of which is executed in parallel by a GPU kernel under the speculation that no inter-iteration dependencies exist. After dependency checking, the buffered writes of iterations without mis-speculations are copied to the master memory, while iterations that encounter mis-speculations are re-executed. GPU-TLS addresses several key problems of speculative loop parallelization on GPUs: (1) the higher mis-speculation rate caused by the larger number of threads is reduced by three approaches: the loop-chopping parallelization approach, the deferred memory update scheme, and the intra-warp value forwarding method; (2) the higher overhead of dependency checking is reduced by a hybrid scheme that combines eager intra-warp dependency checking with lazy inter-warp dependency checking; (3) the bottleneck of serial commit is alleviated by a parallel commit scheme, which allows different iterations to enter the commit phase out of order while still guaranteeing sequential semantics. Extensive evaluations using both micro-benchmarks and real-life applications on two recent NVIDIA GPU cards show that speculative loop parallelization using GPU-TLS achieves speedups ranging from 5 to 160 for sequential programs with possibly-parallel loops.
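To make the speculation workflow concrete, below is a minimal, hypothetical CUDA sketch of the core idea described in the abstract: one sub-loop of a possibly-parallel loop is executed speculatively by a kernel, each iteration's write is deferred into a buffer and its read/write addresses are logged, and a later pass detects inter-iteration dependencies, commits clean iterations, and re-executes mis-speculated ones. This is not the authors' implementation: the loop body a[idx[i]] = f(a[i]), the single sub-loop, and the host-side serial checking and commit (standing in for GPU-TLS's eager intra-warp / lazy inter-warp checking and its parallel commit scheme) are illustrative assumptions.

// Hypothetical sketch (not the GPU-TLS code): speculative execution of one
// sub-loop of the possibly-parallel loop  a[idx[i]] = f(a[i]),  i = LO..HI-1.
// Each thread runs one iteration, defers its write into a buffer, and logs the
// address it read and the address it intends to write.
#include <cstdio>
#include <cuda_runtime.h>

__device__ float f(float x) { return 0.5f * x + 1.0f; }   // illustrative loop body

__global__ void spec_subloop(const float *a, const int *idx, int lo, int hi,
                             int *read_log, int *write_log, float *write_buf)
{
    int i = lo + blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= hi) return;
    int slot = i - lo;
    read_log[slot]  = i;        // iteration i reads a[i]
    write_log[slot] = idx[i];   // and intends to write a[idx[i]]
    write_buf[slot] = f(a[i]);  // deferred (buffered) write; master memory untouched
}

int main()
{
    const int N = 1024, LO = 0, HI = 256;          // one sub-loop of 256 iterations
    float h_a[N]; int h_idx[N];
    for (int i = 0; i < N; ++i) { h_a[i] = 1.0f; h_idx[i] = (i * 7) % N; }

    float *d_a, *d_buf; int *d_idx, *d_rlog, *d_wlog;
    cudaMalloc(&d_a,    N * sizeof(float));
    cudaMalloc(&d_idx,  N * sizeof(int));
    cudaMalloc(&d_buf,  (HI - LO) * sizeof(float));
    cudaMalloc(&d_rlog, (HI - LO) * sizeof(int));
    cudaMalloc(&d_wlog, (HI - LO) * sizeof(int));
    cudaMemcpy(d_a,   h_a,   N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_idx, h_idx, N * sizeof(int),   cudaMemcpyHostToDevice);

    spec_subloop<<<(HI - LO + 127) / 128, 128>>>(d_a, d_idx, LO, HI,
                                                 d_rlog, d_wlog, d_buf);
    cudaDeviceSynchronize();

    int rlog[HI - LO], wlog[HI - LO]; float buf[HI - LO];
    cudaMemcpy(rlog, d_rlog, sizeof(rlog), cudaMemcpyDeviceToHost);
    cudaMemcpy(wlog, d_wlog, sizeof(wlog), cudaMemcpyDeviceToHost);
    cudaMemcpy(buf,  d_buf,  sizeof(buf),  cudaMemcpyDeviceToHost);

    // Simplified host-side stand-in for dependency checking and commit:
    // iteration j mis-speculates if an earlier iteration of the sub-loop wrote
    // the address j read; otherwise its buffered write is committed.
    for (int j = LO; j < HI; ++j) {
        bool misspec = false;
        for (int k = LO; k < j; ++k)
            if (wlog[k - LO] == rlog[j - LO]) misspec = true;  // RAW violation
        if (misspec)
            h_a[h_idx[j]] = 0.5f * h_a[j] + 1.0f;   // re-execute with committed values
        else
            h_a[wlog[j - LO]] = buf[j - LO];        // commit the speculative result
    }
    printf("a[0] = %f\n", h_a[0]);

    cudaFree(d_a); cudaFree(d_idx); cudaFree(d_buf); cudaFree(d_rlog); cudaFree(d_wlog);
    return 0;
}

In the actual system, per the abstract, the dependency checks run on the GPU (eagerly within a warp, lazily across warps) and commits proceed in parallel and possibly out of order while preserving sequential semantics; the serial host-side loop above is only for illustration.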
Keywords :
cloud computing; graphics processing units; parallel processing; program compilers; GPU kernel; GPU-TLS; HPC; NVIDIA GPU cards; cloud environments; complex compile-time techniques; deferred memory update scheme; eager intra-warp dependency checking; general purpose applications; high-level languages; inter-iteration dependency; intra-warp value forwarding method; lazy inter-warp dependency checking; loop chopping parallelization approach; parallel GPU code generation; parallel commit scheme; parallel platform; runtime parallelism; sequential programs; sequential semantics; speculative loop parallelization; Arrays; Graphics processing units; Instruction sets; Kernel; Parallel processing; Runtime; GPGPU; GPU-TLS; Speculative Loop Parallelization; Thread-Level Speculation (TLS);
Conference_Title :
2013 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid)
Conference_Location :
Delft
Print_ISBN :
978-1-4673-6465-2
DOI :
10.1109/CCGrid.2013.34