DocumentCode :
3357478
Title :
Network caching for Chip Multiprocessors
Author :
Wang, Jinglei ; Xue, Yibo ; Wang, Haixia ; Wang, Dongsheng
Author_Institution :
Dept. of Comput. Sci. & Technol., Tsinghua Univ., Beijing, China
fYear :
2009
fDate :
14-16 Dec. 2009
Firstpage :
341
Lastpage :
348
Abstract :
The large working sets of commercial and scientific workloads favor a shared L2 cache design that maximizes the aggregate cache capacity and minimizes off-chip memory requests in chip multiprocessors (CMP). Two important hurdles restrict the scalability of these chip multiprocessors: the on-chip memory cost of the directory and long L1 miss latencies. This work presents a network caching architecture that addresses these two problems. Network caching takes advantage of on-chip networks to manage shared data blocks and directory information in chip multiprocessors. The network caching architecture removes the directory structure from the shared L2 caches and instead stores directory information for the blocks recently cached by L1 caches in the network interface components, decreasing on-chip directory memory overhead and improving scalability. The saved memory space is used as shared data caches or victim caches, which are embedded into the network interface components to further reduce L1 miss latencies. This paper develops three network caching designs to reduce L1 miss latencies. The proposed architecture is evaluated through simulations of a 16-core tiled CMP. First, we demonstrate that the network caching architecture provides good scalability. Second, the network caching architecture also provides robust performance. Third, the different network caching designs have distinct impacts on CMP performance. Compared with the traditional shared L2 cache design, the network victim cache (NVC) design improves performance by 23% on average, and by up to 34% at best. The network shared cache (NSC) design improves performance by 6% on average, and by up to 16% at best. The network directory cache (NDC) design improves performance by 4% on average, and by up to 11% at best.
Keywords :
cache storage; microprocessor chips; network-on-chip; L1 caches; chip multiprocessors; network caching architecture; network interface components; network on chip; network shared cache design; network victim cache; on-chip directory memory overhead; on-chip memory cost; shared L2 cache design; Aggregates; Computer science; Costs; Delay; Network interfaces; Network-on-a-chip; Protocols; Robustness; Scalability; Tiles; Chip Multiprocessors; Network on Chip; directory-based cache coherence;
fLanguage :
English
Publisher :
ieee
Conference_Title :
2009 IEEE 28th International Performance Computing and Communications Conference (IPCCC)
Conference_Location :
Scottsdale, AZ
ISSN :
1097-2641
Print_ISBN :
978-1-4244-5737-3
Type :
conf
DOI :
10.1109/PCCC.2009.5403830
Filename :
5403830