• DocumentCode
    856013
  • Title
    Tolerating Cache-Miss Latency with Multipass Pipelines
  • Author
    Barnes, Ronald D.; Ryoo, Shane; Hwu, Wen-Mei W.

  • Author_Institution
    George Mason Univ., Fairfax, VA
  • Volume
    26
  • Issue
    1
  • fYear
    2006
  • Firstpage
    40
  • Lastpage
    47
  • Abstract
    Microprocessors exploit instruction-level parallelism and tolerate memory-access latencies to achieve high performance. Out-of-order microprocessors do this by dynamically scheduling instruction execution, but they require power-hungry hardware structures. This article describes multipass pipelining, a microarchitectural model that provides an alternative to out-of-order execution for tolerating memory-access latencies. We call our approach "flea-flicker" multipass pipelining because it uses two (or more) passes of preexecution or execution to achieve its performance benefits. Multipass pipelining assumes compile-time scheduling for lower-power, lower-complexity exploitation of instruction-level parallelism. (A toy illustration of the two-pass idea follows this record.)
  • Keywords
    cache storage; dynamic scheduling; fault tolerant computing; instruction sets; pipeline processing; cache-miss latency tolerance; compile-time scheduling; flea-flicker multipass pipelining; instruction execution dynamic scheduling; instruction-level parallelism; memory-access latency tolerance; microarchitectural model; out-of-order microprocessors; power-hungry hardware structures; Delay; Dynamic scheduling; Hardware; Microprocessors; Pipeline processing; Processor scheduling; Random access memory; Registers; Runtime; Sun; Flea-flicker; in-order design; memory-latency tolerance; multipass pipelining;
  • fLanguage
    English
  • Journal_Title
    IEEE Micro
  • Publisher
    IEEE
  • ISSN
    0272-1732
  • Type
    jour
  • DOI
    10.1109/MM.2006.25
  • Filename
    1603496
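
The abstract above describes the flea-flicker mechanism only at a high level: a first, in-order pass that keeps executing past a cache miss while deferring the miss's dependents, and a later pass that re-executes the deferred work once the data returns. The Python sketch below is a hypothetical, assumption-laden illustration of that two-pass idea, not the paper's actual microarchitecture; the instruction format, the placeholder register values, and the single deferred queue are all invented for exposition.

    # Toy model of a two-pass ("flea-flicker"-style) pipeline, for illustration only.
    # Pass 1 executes in program order but never stalls on a cache miss: results of
    # missing loads are marked "poisoned" and their dependents are deferred.
    # Pass 2 replays only the deferred instructions after the miss data has
    # (conceptually) returned.

    class Instr:
        def __init__(self, dst, srcs, is_load=False, misses=False):
            self.dst = dst          # destination register name
            self.srcs = srcs        # list of source register names
            self.is_load = is_load  # is this a load?
            self.misses = misses    # does the load miss in the cache?

    def execute(regs, ins):
        # Placeholder ALU: any value will do for this illustration.
        regs[ins.dst] = 1 + sum(regs.get(s, 0) for s in ins.srcs)

    def multipass_run(program):
        regs = {}          # architectural register file (name -> value)
        poisoned = set()   # registers produced by deferred instructions
        deferred = []      # instructions skipped during the first pass

        # Pass 1: advance past the miss, completing all independent work.
        for ins in program:
            if (ins.is_load and ins.misses) or any(s in poisoned for s in ins.srcs):
                poisoned.add(ins.dst)
                deferred.append(ins)
            else:
                execute(regs, ins)

        # Pass 2: the miss has resolved; replay deferred work in program order.
        for ins in deferred:
            execute(regs, ins)
        return regs

    # r1 <- load (misses); r2 depends on r1; r3 is independent work that
    # the first pass completes "under" the miss.
    prog = [Instr("r1", [], is_load=True, misses=True),
            Instr("r2", ["r1"]),
            Instr("r3", [])]
    print(multipass_run(prog))  # all three registers end up defined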