Title :
Deterministic Boltzmann machine VLSI can be scaled using multi-chip modules
Author :
Murray, Michael ; Burr, James B. ; Stork, David G. ; Leung, Ming-Tak ; Boonyanit, Kan ; Wolff, Gregory J. ; Peterson, Allen M.
Author_Institution :
Dept. of Electr. Eng., Stanford Univ., CA, USA
Abstract :
Describes a special-purpose, very-high-speed digital deterministic Boltzmann neural network VLSI chip. Each chip has 32 physical neural processors, which can be apportioned into an arbitrary topology (input, multiple hidden, and output layers) of up to 160 virtual neurons in total. Under typical conditions, the chip learns at approximately 5×10^8 connection updates per second (CUPS). Through relatively minor (subsequent) modifications, the authors' chips can be 'tiled' in multi-chip modules to make multi-layer networks of arbitrary size, suffering only slight communication delays and overhead. In this way, the number of CUPS can be made arbitrarily large, limited only by the number of chips tiled. The chip's high speed is due to massively parallel array computation of the inner products of connection weights and neural activations, limited (but adequate) precision for weights and activations (5 bits), a high clock rate (180 MHz), and several algorithmic and design insights.
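The core operation the abstract credits for the chip's speed is the inner product of 5-bit quantized connection weights and neural activations. A minimal sketch of that computation follows; the quantization scheme, value ranges, and function names here are illustrative assumptions, not the chip's actual datapath, and the sequential loop stands in for what the hardware computes in a massively parallel array.

```python
# Illustrative sketch only: the 5-bit quantization scheme and value range
# [-1, 1] are assumptions, not taken from the chip's specification.

def quantize_5bit(x, lo=-1.0, hi=1.0):
    """Map a real value in [lo, hi] onto one of 2**5 = 32 integer codes."""
    levels = 31  # 32 levels -> codes 0..31
    x = min(max(x, lo), hi)  # clamp to the representable range
    return round((x - lo) / (hi - lo) * levels)

def net_input(weights, activations):
    """A neuron's net input as a sum of products of 5-bit codes.

    On the chip this sum is formed in parallel across the processor
    array; here a plain loop serves for illustration.
    """
    return sum(quantize_5bit(w) * quantize_5bit(a)
               for w, a in zip(weights, activations))

net = net_input([0.5, -0.25, 1.0], [1.0, 0.0, 0.5])
```

Restricting weights and activations to 5 bits keeps each multiplier small, which is what allows many of them to run in parallel at a high clock rate.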
Keywords :
Boltzmann machines; VLSI; backpropagation; microprocessor chips; optical neural nets; arbitrary topology; communications delays; connection weights; deterministic Boltzmann machine VLSI; massively parallel array computation; multichip modules; neural activations; neural network VLSI chip; Backpropagation algorithms; Machine learning; Machine learning algorithms; Neural networks; Neurons; Scheduling algorithm; Simulated annealing; Stochastic processes; Temperature; Very large scale integration;
Conference_Titel :
Application Specific Array Processors, 1992. Proceedings of the International Conference on
Conference_Location :
Berkeley, CA
Print_ISBN :
0-8186-2967-3
DOI :
10.1109/ASAP.1992.218571