DocumentCode
2846292
Title
Accommodation of the Bandwidth of Large Cache Blocks Using Cache/Memory Link Compression
Author
Thuresson, Martin ; Stenstrom, Per
Author_Institution
Dept. of Comput. Sci. & Eng., Chalmers Univ. of Technol., Goteborg
fYear
2008
fDate
9-12 Sept. 2008
Firstpage
478
Lastpage
486
Abstract
The growing mismatch between processor and memory speed continues to make memory-hierarchy design important. While larger cache blocks can exploit more spatial locality, they increase the demand for off-chip memory bandwidth, a scarce resource in future microprocessor designs. We show that it is possible to use larger block sizes without increasing off-chip memory traffic by applying compression techniques to cache/memory block transfers. Since compression reduces the bandwidth demand by up to a factor of three, we propose using larger blocks. Although compression/decompression lies on the critical memory access path, we find that its negative impact on memory access latency is often dwarfed by the performance gains from larger block sizes. Our proposed scheme combines link compression with a previously published mechanism that dynamically selects a larger cache block when the observed spatial locality makes it advantageous. This combined scheme consistently improves performance, by 19% on average.
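The abstract describes compressing cache blocks as they cross the cache/memory link so that larger blocks do not inflate off-chip traffic. As a rough illustration only, the C sketch below shows one simple way such link compression could shrink a mostly-zero cache block before transfer; the block size, the zero-run encoding, and all identifiers (BLOCK_BYTES, ESCAPE, compress_block) are assumptions made for this example and are not taken from the paper.

/* Minimal sketch (not the paper's actual encoding): a zero-run link
 * compressor that reports how many bytes would cross the cache/memory
 * link for one block transfer. */
#include <stdio.h>
#include <stdint.h>

#define BLOCK_BYTES 128   /* assumed "large" cache block size */
#define ESCAPE 0xFF       /* marker byte introducing a zero-run token */

/* Encode runs of zero bytes as (ESCAPE, run length); copy other bytes
 * verbatim. Returns the number of bytes sent over the link. */
static size_t compress_block(const uint8_t *in, uint8_t *out) {
    size_t n = 0;
    for (size_t i = 0; i < BLOCK_BYTES; ) {
        if (in[i] == 0) {
            size_t run = 0;
            while (i + run < BLOCK_BYTES && in[i + run] == 0 && run < 255)
                run++;
            out[n++] = ESCAPE;
            out[n++] = (uint8_t)run;
            i += run;
        } else {
            out[n++] = in[i++];
        }
    }
    return n;
}

int main(void) {
    uint8_t block[BLOCK_BYTES] = {0};   /* mostly-zero block: a good case */
    uint8_t wire[2 * BLOCK_BYTES];      /* worst case roughly doubles */
    block[0] = 42;                      /* a little non-zero data */
    block[64] = 7;

    size_t sent = compress_block(block, wire);
    printf("uncompressed: %d B, compressed: %zu B (%.1fx reduction)\n",
           BLOCK_BYTES, sent, (double)BLOCK_BYTES / (double)sent);
    return 0;
}

Note that a real link codec would also have to escape payload bytes equal to ESCAPE and place a matching decompressor at the other end of the link; both are omitted here for brevity.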
Keywords
cache storage; cache/memory link compression; compression/decompression; memory access latency; microprocessor design; off-chip memory bandwidth; processor-memory speed; Bandwidth; Computer architecture; Computer science; Delay; Design engineering; Encoding; Inhibitors; Microprocessors; Parallel processing; Performance gain; data link compression; performance
fLanguage
English
Publisher
ieee
Conference_Titel
2008 37th International Conference on Parallel Processing (ICPP '08)
Conference_Location
Portland, OR
ISSN
0190-3918
Print_ISBN
978-0-7695-3374-2
Electronic_ISBN
0190-3918
Type
conf
DOI
10.1109/ICPP.2008.47
Filename
4625884