DocumentCode :
2946786
Title :
Thread block compaction for efficient SIMT control flow
Author :
Fung, Wilson W. L. ; Aamodt, Tor M.
Author_Institution :
Univ. of British Columbia, Vancouver, BC, Canada
fYear :
2011
fDate :
12-16 Feb. 2011
Firstpage :
25
Lastpage :
36
Abstract :
Manycore accelerators such as graphics processor units (GPUs) organize processing units into single-instruction, multiple-data “cores” to improve throughput per unit hardware cost. Programming models for these accelerators encourage applications to run kernels with large groups of parallel scalar threads. The hardware groups these threads into warps/wavefronts and executes them in lockstep, dubbed single-instruction, multiple-thread (SIMT) by NVIDIA. While current GPUs employ a per-warp (or per-wavefront) stack to manage divergent control flow, this stack-based mechanism loses efficiency for applications with nested, data-dependent control flow. In this paper, we propose and evaluate the benefits of extending the sharing of resources in a block of warps, already used for scratchpad memory, to exploit control flow locality among threads (where such sharing may at first seem detrimental). In our proposal, warps within a thread block share a common block-wide stack for divergence handling. At a divergent branch, threads are compacted into new warps in hardware. Our simulation results show that this compaction mechanism provides an average speedup of 22% over a baseline per-warp, stack-based reconvergence mechanism, and 17% versus dynamic warp formation on a set of CUDA applications that suffer significantly from control flow divergence.
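Illustration (not from the paper): the following minimal CUDA kernel, with hypothetical names (classify, in, out), shows the kind of nested, data-dependent branching the abstract refers to. With per-warp stack reconvergence, threads of a warp that take different sides of a branch are serialized; thread block compaction instead regroups same-path threads drawn from all warps of the block into new warps.

__global__ void classify(const float *in, float *out, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;

    float v = in[tid];
    if (v > 0.0f) {              // data-dependent branch: divergence point
        if (v > 1.0f)            // nested divergence within the taken path
            out[tid] = logf(v);
        else
            out[tid] = v * v;
    } else {
        out[tid] = 0.0f;         // alternate path; the warp reconverges after the if/else
    }
}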
Keywords :
computer graphic equipment; coprocessors; multiprocessing systems; parallel architectures; CUDA applications; NVIDIA; SIMT control flow; data dependent control flow; divergence handling; graphics processor units; manycore accelerators; parallel scalar threads; single instruction multiple data cores; stack based reconvergence mechanism; thread block compaction; warps; wavefronts; Compaction; Graphics processing unit; Hardware; Instruction sets; Kernel; Pipelines; Random access memory
fLanguage :
English
Publisher :
ieee
Conference_Titel :
High Performance Computer Architecture (HPCA), 2011 IEEE 17th International Symposium on
Conference_Location :
San Antonio, TX
ISSN :
1530-0897
Print_ISBN :
978-1-4244-9432-3
Type :
conf
DOI :
10.1109/HPCA.2011.5749714
Filename :
5749714