DocumentCode
3370037
Title
Analyzing CUDA workloads using a detailed GPU simulator
Author
Bakhoda, Ali ; Yuan, George L. ; Fung, Wilson W. L. ; Wong, Henry ; Aamodt, Tor M.
Author_Institution
Univ. of British Columbia, Vancouver, BC
fYear
2009
fDate
26-28 April 2009
Firstpage
163
Lastpage
174
Abstract
Modern graphics processing units (GPUs) provide sufficiently flexible programming models that understanding their performance can provide insight into designing tomorrow's manycore processors, whether those are GPUs or otherwise. The combination of multiple, multithreaded, SIMD cores makes studying these GPUs useful in understanding tradeoffs among memory, data, and thread level parallelism. While modern GPUs offer orders of magnitude more raw computing power than contemporary CPUs, many important applications, even those with abundant data level parallelism, do not achieve peak performance. This paper characterizes several non-graphics applications written in NVIDIA's CUDA programming model by running them on a novel detailed microarchitecture performance simulator that runs NVIDIA's parallel thread execution (PTX) virtual instruction set. For this study, we selected twelve non-trivial CUDA applications demonstrating varying levels of performance improvement on GPU hardware (versus a CPU-only sequential version of the application). We study the performance of these applications on our GPU performance simulator with configurations comparable to contemporary high-end graphics cards. We characterize the performance impact of several microarchitecture design choices, including the choice of interconnect topology, use of caches, design of the memory controller, parallel workload distribution mechanisms, and memory request coalescing hardware. Two observations we make are (1) that, for the applications we study, performance is more sensitive to interconnect bisection bandwidth than to latency, and (2) that, for some applications, running fewer threads concurrently than on-chip resources might otherwise allow can improve performance by reducing contention in the memory system.
Keywords
cache storage; computer graphic equipment; instruction sets; multi-threading; multiprocessing systems; parallel architectures; CUDA programming; CUDA workload; GPU hardware; GPU simulator; caches; flexible programming model; graphic processing unit; high-end graphics card; interconnect topology; memory controller; memory request coalescing hardware; microarchitecture design; microarchitecture performance simulator; parallel thread execution; parallel workload distribution; virtual instruction set; Analytical models; Computational modeling; Concurrent computing; Graphics; Hardware; Microarchitecture; Parallel processing; Parallel programming; Process design; Yarn;
fLanguage
English
Publisher
ieee
Conference_Titel
IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2009
Conference_Location
Boston, MA
Print_ISBN
978-1-4244-4184-6
Type
conf
DOI
10.1109/ISPASS.2009.4919648
Filename
4919648