DocumentCode :
2931645
Title :
Controlling the Power and Area of Neural Branch Predictors for Practical Implementation in High-Performance Processors
Author :
Jiménez, Daniel A. ; Loh, Gabriel H.
Author_Institution :
Dept. of Comput. Sci., Rutgers Univ., Newark, NJ
fYear :
2006
fDate :
Oct. 2006
Firstpage :
55
Lastpage :
62
Abstract :
Neural-inspired branch predictors achieve very low branch misprediction rates. However, previously proposed implementations have a variety of characteristics that make them challenging to implement in future high-performance processors. In particular, the original perceptron branch predictor suffers from a long access latency, and the faster path-based neural predictor (PBNP) requires deep pipelining and additional area to support checkpointing for misprediction recovery. The complexity of the PBNP stems from the fact that the path-history length, which determines the number of tables and pipeline stages, is equal to the outcome-history length, which is typically very long for high accuracy. We propose to decouple the path-history length from the outcome-history length through a new technique called modulo-path history. By allowing a shorter path history, we can implement a PBNP with significantly fewer tables and pipeline stages while still exploiting a traditional long branch outcome history. The shorter pipeline reduces power and implementation complexity. We also propose folded modulo-path history to allow the number of pipeline stages to differ from the path-history length. We show that our modulo-path PBNP at 8 KB can achieve prediction accuracy and overall performance within 0.8% (SPECint) of the original PBNP while simultaneously reducing predictor energy consumption by ~29% per access and predictor die area by ~35%. Our folded modulo-path history PBNP achieves performance within 1.3% of ideal, with a ~37% energy reduction and a ~36% predictor area reduction.
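To make the decoupling concrete, the following C sketch models only the indexing idea behind modulo-path history: a functional, unpipelined simulation in which the weight for outcome bit i is indexed with the address of branch (i mod P). The parameters HIST_LEN, PATH_LEN, NUM_ROWS, THETA and the XOR hash are illustrative assumptions, not the paper's configuration, and the sketch does not model the paper's hardware organization, where the h weight columns are grouped into P+1 physical tables so that one pipelined access per path component fetches several weights.

    /*
     * Functional (unpipelined) sketch of a path-based neural predictor (PBNP)
     * with modulo-path history. All sizes and the hash are illustrative.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define HIST_LEN 32   /* outcome-history length h (kept long)         */
    #define PATH_LEN 8    /* path-history length P, decoupled from h      */
    #define NUM_ROWS 256  /* rows per weight table (illustrative)         */
    #define THETA    60   /* perceptron training threshold (illustrative) */

    static int8_t   weights[HIST_LEN + 1][NUM_ROWS]; /* column 0 = bias weights */
    static bool     ghist[HIST_LEN];                 /* global outcome history  */
    static uint32_t path[PATH_LEN];                  /* recent branch addresses */

    /* Modulo-path history: outcome bit i is paired with the address of branch
     * (i mod P), so only P distinct path components (and, in hardware, P
     * pipelined table reads) are needed for h outcome bits.                  */
    static unsigned row(uint32_t pc, int i)
    {
        return (pc ^ path[i % PATH_LEN]) % NUM_ROWS;
    }

    static int perceptron_sum(uint32_t pc)
    {
        int sum = weights[0][pc % NUM_ROWS];          /* bias weight */
        for (int i = 0; i < HIST_LEN; i++) {
            int8_t w = weights[1 + i][row(pc, i)];
            sum += ghist[i] ? w : -w;
        }
        return sum;
    }

    bool predict(uint32_t pc)
    {
        return perceptron_sum(pc) >= 0;               /* taken if sum >= 0 */
    }

    static void adjust(int8_t *w, int delta)          /* saturating 8-bit update */
    {
        int v = *w + delta;
        if (v >  127) v =  127;
        if (v < -128) v = -128;
        *w = (int8_t)v;
    }

    void update(uint32_t pc, bool taken)
    {
        int  sum  = perceptron_sum(pc);
        bool pred = sum >= 0;
        int  mag  = sum < 0 ? -sum : sum;

        /* Standard perceptron training: adjust weights on a misprediction or
         * when the output magnitude falls below the threshold.              */
        if (pred != taken || mag <= THETA) {
            adjust(&weights[0][pc % NUM_ROWS], taken ? 1 : -1);
            for (int i = 0; i < HIST_LEN; i++)
                adjust(&weights[1 + i][row(pc, i)], (taken == ghist[i]) ? 1 : -1);
        }

        /* Shift in the new outcome and the new path component. */
        memmove(&ghist[1], &ghist[0], (HIST_LEN - 1) * sizeof ghist[0]);
        ghist[0] = taken;
        memmove(&path[1], &path[0], (PATH_LEN - 1) * sizeof path[0]);
        path[0] = pc;
    }

Because the indexing reuses only P path components, a hardware realization needs only P pipeline stages and table arrays regardless of how long the outcome history is, which is the source of the power and area savings the abstract reports.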
Keywords :
neural net architecture; parallel architectures; pipeline processing; folded modulo-path history; high-performance processors; neural branch predictors; outcome-history length; path-based neural predictor; path-history length; pipeline length reduction; Accuracy; Checkpointing; Computer science; Delay; Educational institutions; Energy consumption; History; Machine learning algorithms; Pipeline processing; Random access memory;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Computer Architecture and High Performance Computing, 2006. SBAC-PAD '06. 18th International Symposium on
Conference_Location :
Ouro Preto
ISSN :
1550-6533
Print_ISBN :
0-7695-2704-3
Type :
conf
DOI :
10.1109/SBAC-PAD.2006.14
Filename :
4032416