Title :
A unified compressed memory hierarchy
Author :
Hallnor, Erik G. ; Reinhardt, Steven K.
Author_Institution :
Dept. of EECS, University of Michigan, Dearborn, MI, USA
Abstract :
The memory system's large and growing contribution to system performance motivates more aggressive approaches to improving its efficiency. We propose and analyze a memory hierarchy that uses a unified compression scheme encompassing the last-level on-chip cache, the off-chip memory channel, and off-chip main memory. This scheme simultaneously increases the effective on-chip cache capacity, off-chip bandwidth, and main memory size, while avoiding compression and decompression overheads between levels. Simulations of the SPEC CPU2000 benchmarks using a 1 MB cache and 128-byte blocks show an average speedup of 19%, while degrading performance by no more than 5% on any benchmark. The combined scheme achieves a peak improvement of 292%, compared to 165% and 83% for cache or bus compression alone. The compressed system generally provides even better performance as the block size is increased to 512 bytes.
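The key mechanism behind the capacity gains described above is that a compressed block occupies only as much storage as it compresses to, rounded up to some allocation granularity. As a rough illustration (this is not the authors' compression algorithm; the use of zlib, the helper name, and the 32-byte sub-block granularity are all illustrative assumptions), a 128-byte block might be stored in a variable number of fixed-size sub-blocks:

```python
import math
import zlib

SUBBLOCK = 32   # allocation granularity in bytes (assumed for illustration)
BLOCK = 128     # uncompressed cache block size, as in the paper's simulations

def subblocks_needed(data: bytes) -> int:
    """Compress one cache block and return how many sub-blocks it occupies.

    Blocks that do not compress to less than BLOCK bytes are assumed to be
    stored uncompressed, so a block never takes more than BLOCK // SUBBLOCK
    sub-blocks.
    """
    compressed = zlib.compress(data)
    stored_size = min(len(compressed), BLOCK)
    return math.ceil(stored_size / SUBBLOCK)

# A zero-filled block compresses to a single sub-block, so four such blocks
# fit where one uncompressed block would go -- the source of the effective
# capacity and bandwidth increase.
print(subblocks_needed(bytes(BLOCK)))  # highly compressible block
```

Because the same compressed representation is used in the cache, on the memory channel, and in main memory, a block stored this way never needs to be recompressed when it moves between levels, which is the overhead the unified scheme avoids.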
Keywords :
cache storage; data compression; memory architecture; 1 GByte; 128 byte; 512 bytes; SPEC CPU2000 benchmarks; compressed memory hierarchy; last-level on-chip cache; off-chip bandwidth; off-chip main memory; off-chip memory channel; system performance; Bandwidth; Computer architecture; Costs; Degradation; Delay; Economics; Frequency; Laboratories; Random access memory; System performance
Conference_Titel :
11th International Symposium on High-Performance Computer Architecture (HPCA-11), 2005
Print_ISBN :
0-7695-2275-0
DOI :
10.1109/HPCA.2005.4