DocumentCode :
2669931
Title :
A sparse memory-access neural network engine with 96 parallel data-driven processing units
Author :
Aihara, K.; Fujita, O.; Uchimura, K.
Author_Institution :
NTT LSI Labs., Kanagawa, Japan
fYear :
1995
fDate :
15-17 Feb. 1995
Firstpage :
72
Lastpage :
73
Abstract :
New neural network operation schemes are needed to produce high-performance neural network chips that combine a large-capacity synapse weight memory with high computational speed. Digital chips using specific neural models that reduce neuron calculations have been proposed. In another digital chip, the calculation of negligibly small values is eliminated to improve computational speed, at the expense of calculation accuracy. The sparse memory-access (SMA) neuro-chip architecture achieves high computational speed without an accuracy penalty. The SMA architecture can be applied to multi-layered perceptron networks and uses two key techniques, compressible synapse weight neuron calculation (CSNC) and differential neuron operation (DNO), to reduce calculations and accesses to the synapse weight memories.
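Illustrative sketch (not part of the original record): the two techniques named in the abstract can be sketched in software under assumed data structures. The Python below is a hedged illustration only; the compressed index/value layout and the function names compress_weights, weighted_sums, and differential_update are invented for exposition and are not the chip's actual design.

import numpy as np

def compress_weights(W):
    # CSNC-style idea: store only the nonzero synapse weights of each
    # neuron, so zero weights cost neither memory accesses nor multiplies.
    compressed = []
    for row in W:
        idx = np.flatnonzero(row)
        compressed.append((idx, row[idx]))
    return compressed

def weighted_sums(compressed, x):
    # Each neuron's weighted sum touches only its stored (nonzero) weights.
    return np.array([vals @ x[idx] for idx, vals in compressed])

def differential_update(compressed, sums, x_old, x_new):
    # DNO-style idea: adjust each neuron's sum only for the inputs whose
    # values changed. The update is exact, so the speedup does not
    # sacrifice calculation accuracy.
    delta = x_new - x_old
    changed = np.flatnonzero(delta)
    for j, (idx, vals) in enumerate(compressed):
        mask = np.isin(idx, changed)
        sums[j] += vals[mask] @ delta[idx[mask]]
    return sums

# Example: recompute only what changed.
W = np.array([[0.0, 2.0, 0.0], [1.0, 0.0, 3.0]])
c = compress_weights(W)
x0 = np.array([1.0, 1.0, 1.0])
s = weighted_sums(c, x0)               # [2.0, 4.0]
x1 = np.array([1.0, 0.5, 1.0])         # only input 1 changed
s = differential_update(c, s, x0, x1)  # [1.0, 4.0]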
Keywords :
content-addressable storage; memory architecture; multilayer perceptrons; neural chips; neural net architecture; parallel architectures; compressible synapse weight neuron calculation; computational speed; differential neuron operation; digital chips; multi-layered perceptron; neural models; neural network engine; neuro-chip architecture; parallel data-driven processing units; sparse memory-access; synapse weight memory; Accuracy; Computer architecture; Computer networks; Engines; Equations; Laboratories; Large scale integration; Neural networks; Neurons; Pattern recognition;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
1995 IEEE International Solid-State Circuits Conference (41st ISSCC), Digest of Technical Papers
Conference_Location :
San Francisco, CA, USA
ISSN :
0193-6530
Print_ISBN :
0-7803-2495-1
Type :
conf
DOI :
10.1109/ISSCC.1995.535281
Filename :
535281