DocumentCode :
3326955
Title :
Managing shared last-level cache in a heterogeneous multicore processor
Author :
Mekkat, Vineeth ; Holey, Anup ; Pen-Chung Yew ; Zhai, Antonia
Author_Institution :
Dept. of Comput. Sci. & Eng., Univ. of Minnesota, Minneapolis, MN, USA
fYear :
2013
fDate :
7-11 Sept. 2013
Firstpage :
299
Lastpage :
308
Abstract :
Heterogeneous multicore processors that integrate CPU cores and data-parallel accelerators such as GPU cores onto the same die raise several new issues for sharing various on-chip resources. The shared last-level cache (LLC) is one of the most important shared resources due to its impact on performance. Accesses to the shared LLC in heterogeneous multicore processors can be dominated by the GPU due to the significantly higher number of threads supported. Under current cache management policies, the CPU applications' share of the LLC can be significantly reduced in the presence of competing GPU applications. For cache-sensitive CPU applications, a reduced share of the LLC could lead to significant performance degradation. In contrast, GPU applications can often tolerate increased memory access latency in the presence of LLC misses when there is sufficient thread-level parallelism. In this work, we propose Heterogeneous LLC Management (HeLM), a novel shared LLC management policy that takes advantage of the GPU's tolerance for memory access latency. HeLM is able to throttle GPU LLC accesses and yield LLC space to cache-sensitive CPU applications. GPU LLC access throttling is achieved by allowing GPU threads that can tolerate longer memory access latencies to bypass the LLC. The latency tolerance of a GPU application is determined by the availability of thread-level parallelism, which can be measured at runtime as the average number of threads that are available for issuing. Our heterogeneous LLC management scheme outperforms the LRU policy by 12.5% and TAP-RRIP by 5.6% for a processor with 4 CPU and 4 GPU cores.
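The abstract's throttling mechanism can be sketched as follows. This is a minimal illustrative model, not the authors' implementation: the class name, the sampling window, and the TLP threshold are all assumptions made for the example. It shows the core idea only, that is, measuring thread-level parallelism at runtime as the average number of GPU threads available for issue, and letting GPU accesses bypass the LLC when that average indicates the GPU can hide the extra latency.

```python
# Hypothetical sketch of HeLM's TLP-driven LLC bypass decision.
# Assumed names/parameters (not from the paper): HeLMThrottle,
# tlp_threshold, window.

class HeLMThrottle:
    """Samples ready GPU threads per cycle and decides LLC bypass."""

    def __init__(self, tlp_threshold: float, window: int = 1024):
        self.tlp_threshold = tlp_threshold  # assumed tuning parameter
        self.window = window                # sampling window, in cycles
        self.samples = []                   # ready-thread counts this window
        self.bypass = False                 # current policy decision

    def record_cycle(self, ready_threads: int) -> None:
        """Record the number of GPU threads available for issue this cycle."""
        self.samples.append(ready_threads)
        if len(self.samples) >= self.window:
            avg_tlp = sum(self.samples) / len(self.samples)
            # High average TLP => latency-tolerant GPU workload =>
            # bypass the LLC, yielding space to cache-sensitive CPU apps.
            self.bypass = avg_tlp >= self.tlp_threshold
            self.samples.clear()

    def should_bypass_llc(self) -> bool:
        return self.bypass
```

Under this sketch, a GPU phase with many ready threads per cycle would trip the bypass decision, while a phase with little thread-level parallelism would keep routing GPU accesses through the shared LLC.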
Keywords :
cache storage; graphics processing units; multi-threading; multiprocessing systems; resource allocation; CPU cores; GPU LLC access throttling; GPU application; GPU cores; GPU threads; GPU tolerance; HeLM; LLC misses; LRU policy; TAP-RRIP; cache management policies; cache sensitive CPU applications; data-parallel accelerators; heterogeneous LLC management; heterogeneous multicore processors; latency tolerance; memory access latency; on-chip resources sharing; shared LLC management policy; shared last-level cache management; thread-level parallelism; Benchmark testing; Graphics processing units; Instruction sets; Multicore processing; Runtime; Sensitivity;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2013 22nd International Conference on Parallel Architectures and Compilation Techniques (PACT)
Conference_Location :
Edinburgh
ISSN :
1089-795X
Print_ISBN :
978-1-4799-1018-2
Type :
conf
DOI :
10.1109/PACT.2013.6618819
Filename :
6618819