Title :
DRAM-page based prediction and prefetching
Author :
Yu, Haifeng ; Kedem, Gershon
Author_Institution :
Dept. of Comput. Sci., Duke Univ., Durham, NC, USA
Abstract :
This paper describes and evaluates a DRAM-page-based cache-line prediction and prefetching architecture. The scheme takes DRAM access timing into account to reduce prefetching overhead, amortizing the high cost of a DRAM access by fetching two cache lines that reside on the same DRAM page in a single access. On each DRAM access, one or two cache blocks may be prefetched. We combine three prediction mechanisms (history, stride, and one-block lookahead), make them DRAM-page sensitive, and deploy them in an effective adaptive prefetching strategy. Our simulations show that the prefetch mechanism can greatly improve system performance. Using a 32-KB prediction table cache, the prefetching scheme improves performance by 26%-55% on average over a baseline configuration, depending on the memory model. Moreover, the simulations show that prefetching is more cost-effective than simply increasing the L2-cache size or using a one-block-lookahead prefetching scheme. Simulation results also show that DRAM-page-based prefetching yields higher relative performance as processors get faster, making the scheme more attractive for next-generation processors.
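The selection policy the abstract outlines can be sketched as follows: on a demand miss, generate candidates from each of the three predictors, then keep only candidates that fall on the same DRAM page as the miss, so the extra lines ride on the already-open row. This is a minimal illustrative sketch, not the paper's implementation; the line size, page size, table layouts, and priority order among the predictors are all assumptions.

```python
# Illustrative sketch of DRAM-page-sensitive prefetch selection.
# All parameters (line size, page size, table layout, predictor
# priority) are assumptions, not the paper's actual configuration.

LINE_SIZE = 64    # bytes per cache line (assumed)
PAGE_SIZE = 2048  # bytes per DRAM page / row (assumed)

def dram_page(addr):
    """DRAM page (row) that a byte address falls in."""
    return addr // PAGE_SIZE

def candidates(miss_addr, history_table, stride_table, pc):
    """Yield prefetch candidate addresses for a demand miss, in an
    assumed priority order: history, then stride, then lookahead."""
    # 1. History prediction: the address previously observed to
    #    follow this miss address.
    if miss_addr in history_table:
        yield history_table[miss_addr]
    # 2. Stride prediction: per-PC observed stride added to the miss.
    if pc in stride_table:
        stride = stride_table[pc]
        if stride != 0:
            yield miss_addr + stride
    # 3. One-block lookahead: the next sequential cache line.
    yield miss_addr + LINE_SIZE

def select_prefetches(miss_addr, history_table, stride_table, pc,
                      max_lines=2):
    """Keep only candidates on the same DRAM page as the miss, so the
    prefetches amortize the already-paid row-activation cost; return
    at most max_lines of them (the paper fetches up to two lines per
    DRAM access)."""
    page = dram_page(miss_addr)
    picked = []
    for addr in candidates(miss_addr, history_table, stride_table, pc):
        if dram_page(addr) == page and addr != miss_addr \
                and addr not in picked:
            picked.append(addr)
        if len(picked) >= max_lines:
            break
    return picked
```

For example, with both a history entry and a stride entry present, the lookahead candidate is never reached because two same-page candidates are already selected; with empty tables, the policy degenerates to one-block lookahead filtered by the page boundary.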
Keywords :
DRAM chips; circuit simulation; memory architecture; performance evaluation; timing; 32-KB prediction table cache; DRAM access timing; DRAM-page based prediction; block lookahead; cache-line prediction; history mechanism; memory model; next generation processors; prefetching; simulation results; stride; system performance; Computer architecture; Computer science; Costs; Delay; History; Magnetic heads; Prefetching; Random access memory; System performance; Timing;
Conference_Titel :
Proceedings of the 2000 International Conference on Computer Design (ICCD 2000)
Conference_Location :
Austin, TX
Print_ISBN :
0-7695-0801-4
DOI :
10.1109/ICCD.2000.878296