DocumentCode
5216
Title
Orchestrating Cache Management and Memory Scheduling for GPGPU Applications
Author
Shuai Mu ; Yangdong Deng ; Yubei Chen ; Huaiming Li ; Jianming Pan ; Wenjun Zhang ; Zhihua Wang
Author_Institution
Institute of Microelectronics, Circuits & Systems, Beijing, China
Volume
22
Issue
8
fYear
2014
fDate
Aug. 2014
Firstpage
1803
Lastpage
1814
Abstract
Modern graphics processing units (GPUs) deliver tremendous computing horsepower by running tens of thousands of threads concurrently. This massively parallel execution model has been effective at hiding the long latency of off-chip memory accesses in graphics and other general-purpose applications that exhibit regular memory behavior. With the fast-growing demand for general-purpose computing on GPUs (GPGPU), GPU workloads are becoming highly diversified and thus require a synergistic coordination of both computing and memory resources to unleash the computing power of GPUs. Accordingly, recent graphics processors have begun to integrate an on-die level-2 (L2) cache. The huge number of threads on GPUs, however, poses significant challenges to L2 cache design. Experiments on a variety of GPGPU applications reveal that the L2 cache may or may not improve overall performance, depending on the characteristics of the application. In this paper, we propose efficient techniques to improve GPGPU performance by orchestrating both the L2 cache and memory in a unified framework. The basic philosophy is to exploit the temporal locality among the massive number of concurrent memory requests and to minimize the impact of memory divergence among simultaneously executed groups of threads. Our major contributions are twofold. First, a priority-based cache management scheme is proposed to maximize the chance that frequently revisited data are kept in the cache. Second, an effective memory scheduling scheme is introduced that reorders memory requests in the memory controller according to their divergence behavior, reducing the average waiting time of warps. Simulation results show that our techniques improve overall performance by 10% on average for memory-intensive benchmarks, with a maximum gain of up to 30%.
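To make the first contribution concrete, the C++ sketch below is an illustrative reconstruction of a priority-based cache replacement policy, not the paper's exact mechanism; all names (PriorityCacheSet, kInsertPriority, the 3-bit counter width) are assumptions. The idea it demonstrates matches the abstract: lines earn priority on every reuse, new lines enter at low priority, and the lowest-priority way is evicted, so frequently revisited data tends to stay in the cache.

// Illustrative sketch of priority-based cache management (hypothetical
// names and parameters, not the paper's actual policy): one set of a
// set-associative cache whose victim is the lowest-priority line.
#include <cstddef>
#include <cstdint>
#include <vector>

struct CacheLine {
    uint64_t tag = 0;
    bool     valid = false;
    uint32_t priority = 0;   // bumped on every hit, aged on misses
};

class PriorityCacheSet {
public:
    explicit PriorityCacheSet(size_t ways) : lines_(ways) {}

    // Returns true on hit. On a miss, the lowest-priority way is evicted
    // and the new line starts low, so it must prove reuse to survive.
    bool access(uint64_t tag) {
        for (auto& l : lines_) {
            if (l.valid && l.tag == tag) {
                if (l.priority < kMaxPriority) ++l.priority;  // reward reuse
                return true;
            }
        }
        CacheLine* victim = &lines_[0];
        for (auto& l : lines_) {
            if (!l.valid) { victim = &l; break; }          // free way first
            if (l.priority < victim->priority) victim = &l; // else lowest priority
        }
        // Age surviving lines so stale high-priority data eventually leaves.
        for (auto& l : lines_)
            if (l.valid && l.priority > 0) --l.priority;
        victim->tag = tag;
        victim->valid = true;
        victim->priority = kInsertPriority;
        return false;
    }

private:
    static constexpr uint32_t kMaxPriority = 7;     // assumed 3-bit counter
    static constexpr uint32_t kInsertPriority = 1;  // assumed insertion value
    std::vector<CacheLine> lines_;
};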
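The second contribution, divergence-aware memory scheduling, can likewise be sketched. The snippet below assumes a shortest-pending-first heuristic: requests from warps with few outstanding accesses are served first, since completing a warp's last pending request unblocks it soonest and lowers the average warp waiting time. The paper's actual reordering criterion may differ, and DivergenceAwareScheduler and its fields are hypothetical names.

// Illustrative sketch of divergence-aware request reordering in the
// memory controller (hypothetical names, not the paper's exact scheme).
#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct MemRequest {
    uint32_t warp_id;
    uint64_t addr;
};

class DivergenceAwareScheduler {
public:
    void enqueue(const MemRequest& r) {
        queue_.push_back(r);
        ++pending_[r.warp_id];  // per-warp count tracks memory divergence
    }

    // Serve the request whose warp has the fewest outstanding accesses:
    // that warp is closest to resuming execution.
    bool dequeue(MemRequest& out) {
        if (queue_.empty()) return false;
        auto it = std::min_element(queue_.begin(), queue_.end(),
            [this](const MemRequest& a, const MemRequest& b) {
                return pending_.at(a.warp_id) < pending_.at(b.warp_id);
            });
        out = *it;
        if (--pending_.at(out.warp_id) == 0) pending_.erase(out.warp_id);
        queue_.erase(it);
        return true;
    }

private:
    std::vector<MemRequest> queue_;
    std::unordered_map<uint32_t, uint32_t> pending_;
};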
Keywords
cache storage; graphics processing units; scheduling; GPGPU applications; cache management; general purpose computing; memory controller; memory scheduling; parallel execution model; Benchmark testing; Graphics processing units; Instruction sets; Memory management; Processor scheduling; Random access memory; Cache management; general purpose computing on graphics processing units (GPGPU); memory latency divergence; priority; warp
fLanguage
English
Journal_Title
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Publisher
IEEE
ISSN
1063-8210
Type
jour
DOI
10.1109/TVLSI.2013.2278025
Filename
6595566
Link To Document