DocumentCode :
695226
Title :
Exploiting compressed block size as an indicator of future reuse
Author :
Pekhimenko, Gennady ; Huberty, Tyler ; Cai, Rui ; Mutlu, Onur ; Gibbons, Phillip B. ; Kozuch, Michael A. ; Mowry, Todd C.
fYear :
2015
fDate :
7-11 Feb. 2015
Firstpage :
51
Lastpage :
63
Abstract :
We introduce a set of new Compression-Aware Management Policies (CAMP) for on-chip caches that employ data compression. Our management policies are based on two key ideas. First, we show that it is possible to build a more efficient management policy for compressed caches if the compressed block size is directly used in calculating the value (importance) of a block to the cache. This leads to Minimal-Value Eviction (MVE), a policy that evicts the cache blocks with the least value, based on both their size and their expected future reuse. Second, we show that, in some cases, compressed block size can be used as an efficient indicator of the future reuse of a cache block. We use this idea to build a new insertion policy called Size-based Insertion Policy (SIP) that dynamically prioritizes cache blocks using their compressed size as an indicator. We compare CAMP (and its global variant G-CAMP) to prior on-chip cache management policies (both size-oblivious and size-aware) and find that our mechanisms are more effective in using compressed block size as an extra dimension in cache management decisions. Our results show that, compared to the best prior mechanism, the proposed management policies (i) decrease off-chip bandwidth consumption (by 8.7% in single-core), (ii) decrease memory subsystem energy consumption (by 7.2% in single-core) for memory-intensive workloads, and (iii) improve performance (by 4.9%/9.0%/10.2% on average in single-/two-/four-core workload evaluations and up to 20.1%). CAMP is effective for a variety of compression algorithms and different cache designs with local and global replacement strategies.
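The following is a minimal, illustrative Python sketch of the Minimal-Value Eviction idea summarized above: each block's value combines its expected future reuse with its compressed size, and the lowest-value block is evicted first. The concrete value function (estimated reuse divided by compressed size) and the reuse estimate (a simple re-reference counter) are assumptions made for illustration, not the exact formulation evaluated in the paper.

from dataclasses import dataclass


@dataclass
class Block:
    tag: int
    compressed_size: int   # size in bytes after compression
    reuse_hits: int = 0    # crude proxy for expected future reuse


class MVESet:
    """One cache set managed with a minimal-value eviction policy (illustrative)."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.blocks: dict[int, Block] = {}

    def _used(self) -> int:
        return sum(b.compressed_size for b in self.blocks.values())

    def _value(self, b: Block) -> float:
        # Higher expected reuse raises value; larger compressed size lowers it.
        return (1 + b.reuse_hits) / b.compressed_size

    def access(self, tag: int, compressed_size: int) -> bool:
        """Return True on a hit; on a miss, insert the block, evicting as needed."""
        if tag in self.blocks:
            self.blocks[tag].reuse_hits += 1
            return True
        # Evict the lowest-value blocks until the incoming block fits.
        while self.blocks and self._used() + compressed_size > self.capacity:
            victim = min(self.blocks.values(), key=self._value)
            del self.blocks[victim.tag]
        self.blocks[tag] = Block(tag, compressed_size)
        return False


if __name__ == "__main__":
    s = MVESet(capacity_bytes=256)
    # Small, frequently reused blocks tend to survive; large, cold blocks
    # become eviction candidates first.
    for tag, size in [(1, 16), (2, 64), (1, 16), (3, 128), (1, 16), (4, 128)]:
        print(f"tag={tag} size={size} hit={s.access(tag, size)}")

A Size-based Insertion Policy could be layered on top of such a set by biasing the initial reuse estimate of an incoming block according to its compressed size, for instance when blocks of that size class have recently shown high reuse.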
Keywords :
cache storage; data compression; power aware computing; CAMP; MVE; SIP; compressed block size; compressed caches; compression-aware management policies; data compression; future cache block reuse; memory subsystem energy consumption; minimal-value eviction; off-chip bandwidth consumption; on-chip cache management policies; size-based insertion policy; Arrays; Bandwidth; Compression algorithms; Memory management; Radiation detectors; System-on-chip;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA)
Conference_Location :
Burlingame, CA
Type :
conf
DOI :
10.1109/HPCA.2015.7056021
Filename :
7056021