  • DocumentCode
    166202
  • Title
    Understanding the tradeoffs between software-managed vs. hardware-managed caches in GPUs
  • Author
    Chao Li; Yi Yang; Hongwen Dai; Shengen Yan; Frank Mueller; Huiyang Zhou
  • Author_Institution
    Dept. of Electr. & Comput. Eng., North Carolina State Univ., Raleigh, NC, USA
  • fYear
    2014
  • fDate
    23-25 March 2014
  • Firstpage
    231
  • Lastpage
    242
  • Abstract
    On-chip caches are commonly used in computer systems to hide long off-chip memory access latencies. To manage on-chip caches, either software-managed or hardware-managed schemes can be employed. State-of-the-art accelerators, such as the NVIDIA Fermi or Kepler GPUs and Intel's forthcoming MIC “Knights Landing” (KNL), support both software-managed caches, a.k.a. shared memory (GPUs) or near memory (KNL), and hardware-managed L1 data caches (D-caches). Furthermore, shared memory and the L1 D-cache on a GPU utilize the same physical storage, and their capacity can be configured at runtime (the same holds for KNL). In this paper, we present an in-depth study to reveal interesting and sometimes unexpected tradeoffs between shared memory and the hardware-managed L1 D-caches in GPU architectures. In our study, the kernels utilizing the L1 D-caches are generated from those leveraging shared memory to ensure that the same optimizations, such as tiling, are applied equally in both versions. Our detailed analyses reveal that, rather than cache hit rates, the following tradeoffs often have more profound performance impacts. On one hand, the kernels utilizing the L1 caches may support higher degrees of thread-level parallelism, offer more opportunities for data to be allocated in registers, and sometimes result in lower dynamic instruction counts. On the other hand, the applications utilizing shared memory enable more coalesced accesses and tend to achieve higher degrees of memory-level parallelism. Overall, our results show that most benchmarks perform significantly better with shared memory than with the L1 D-caches due to the high impact of memory-level parallelism and memory coalescing.
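    To make the methodology concrete, below is a minimal CUDA sketch of the kind of kernel pair the study compares; the matrix-multiply workload, the 16x16 tile size, and the kernel names are illustrative assumptions, not the paper's actual benchmarks. The first kernel stages tiles in software-managed shared memory; the second applies the same block decomposition but leaves data reuse to the hardware-managed L1 D-cache, mirroring how the paper's L1 versions are generated from the shared-memory ones.

        // Illustrative sketch (assumed example, not taken from the paper).
        // Both kernels compute C = A * B for n x n matrices, with n a
        // multiple of TILE, launched with 16x16 thread blocks over an
        // (n/TILE) x (n/TILE) grid.
        #define TILE 16

        // Software-managed cache: threads cooperatively stage tiles in
        // shared memory, so global loads are coalesced across the warp,
        // but barriers and shared-memory capacity can limit parallelism.
        __global__ void matmul_shared(const float *A, const float *B,
                                      float *C, int n) {
            __shared__ float As[TILE][TILE];
            __shared__ float Bs[TILE][TILE];
            int row = blockIdx.y * TILE + threadIdx.y;
            int col = blockIdx.x * TILE + threadIdx.x;
            float acc = 0.0f;
            for (int t = 0; t < n / TILE; ++t) {
                As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
                Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
                __syncthreads();   // tile fully staged before use
                for (int k = 0; k < TILE; ++k)
                    acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
                __syncthreads();   // safe to overwrite the tile
            }
            C[row * n + col] = acc;
        }

        // Hardware-managed cache: same block decomposition, but shared
        // memory and barriers are removed; each thread loads operands
        // directly from global memory, and reuse across threads must be
        // captured by the L1 D-cache. This frees shared memory (allowing
        // higher occupancy and more data in registers) at the cost of
        // redundant global loads whose reuse the cache must catch.
        __global__ void matmul_l1(const float *A, const float *B,
                                  float *C, int n) {
            int row = blockIdx.y * TILE + threadIdx.y;
            int col = blockIdx.x * TILE + threadIdx.x;
            float acc = 0.0f;
            for (int t = 0; t < n / TILE; ++t)
                for (int k = 0; k < TILE; ++k)
                    acc += A[row * n + t * TILE + k]
                         * B[(t * TILE + k) * n + col];
            C[row * n + col] = acc;
        }

    As the abstract notes, which version wins is workload-dependent: the L1 variant trades the coalescing and memory-level parallelism of cooperative staging for lower synchronization overhead and potentially higher thread-level parallelism.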
  • Keywords
    cache storage; graphics processing units; multi-threading; shared memory systems; GPU architecture; Intel MIC Knights Landing; Kepler GPUs; L1 D-caches; NVIDIA Fermi; accelerators; cache hit rates; data allocation; dynamic instruction counts; hardware-managed L1 data caches; hardware-managed caches; memory coalescing; memory-level parallelism; near memory; off-chip memory access latencies; on-chip caches; registers; shared memory; software-managed caches; thread-level parallelism; Computer architecture; Graphics processing units; Kernel; Parallel processing; Prefetching; Tiles
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2014 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS)
  • Conference_Location
    Monterey, CA
  • Print_ISBN
    978-1-4799-3604-5
  • Type
    conf
  • DOI
    10.1109/ISPASS.2014.6844487
  • Filename
    6844487