DocumentCode :
62971
Title :
A High-Throughput Neural Network Accelerator
Author :
Tianshi Chen ; Zidong Du ; Ninghui Sun ; Jia Wang ; Chengyong Wu ; Yunji Chen ; Olivier Temam
Volume :
35
Issue :
3
fYear :
2015
fDate :
May-June 2015
Firstpage :
24
Lastpage :
32
Abstract :
The authors designed an accelerator architecture for large-scale neural networks, with an emphasis on the impact of memory on accelerator design, performance, and energy. In this article, they present a concrete design at 65 nm that can perform 496 16-bit fixed-point operations in parallel every 1.02 ns, that is, 452 GOP/s, in a 3.02-mm², 485-mW footprint (excluding main memory accesses).
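For context, a quick back-of-the-envelope check of the figures quoted above (a minimal sketch: the "raw peak" product and the GOP/s-per-watt ratio are derived here and are not stated in the abstract, whose quoted 452 GOP/s sits slightly below the raw 496-ops-per-1.02-ns product, presumably as a sustained rather than peak figure):

```python
# Back-of-the-envelope check of the figures quoted in the abstract.
# ops_per_cycle, cycle_time_ns, reported_gops, and power_w are taken
# directly from the abstract; the derived values are illustrative only.

ops_per_cycle = 496        # 16-bit fixed-point operations per cycle
cycle_time_ns = 1.02       # cycle time in nanoseconds
reported_gops = 452        # throughput quoted in the abstract (GOP/s)
power_w = 0.485            # 485 mW, excluding main memory accesses

peak_gops = ops_per_cycle / cycle_time_ns        # ~486 GOP/s raw product
efficiency_gops_per_w = reported_gops / power_w  # ~932 GOP/s per watt

print(f"raw peak  : {peak_gops:.0f} GOP/s")
print(f"reported  : {reported_gops} GOP/s")
print(f"efficiency: {efficiency_gops_per_w:.0f} GOP/s/W")
```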
Keywords :
neural nets; accelerator architecture; accelerator design; high-throughput neural network accelerator; size 65 nm; Accelerators; Artificial neural networks; Computer architecture; Graphics processing units; Machine learning; Market research; Neural networks; hardware accelerator; machine learning; neural network
fLanguage :
English
Journal_Title :
IEEE Micro
Publisher :
IEEE
ISSN :
0272-1732
Type :
jour
DOI :
10.1109/MM.2015.41
Filename :
7106400