Abstract:
A general-purpose neuro-chip for the resolution and learning stages of neural computing is described. Parallelism over input neurons is exploited both for evaluating output neuron states and for updating synaptic weights; the latter follows a Hebb-like local formula. The digital realization guarantees exact integer calculations. According to simulations and theoretical considerations, 16-bit precision on the weights should yield sufficient accuracy for the foreseen applications. An arrangement of several chips, with suitable programming, can emulate most formal neural networks. The architecture can implement most learning schemes, including the minimum-overlap rule, Kohonen's self-organizing map, and error back-propagation. The chip is fully cascadable, allowing neural networks of any size and architecture. A typical on-chip running time for updating one output neuron state (resolution stage), or all synaptic weights related to one neuron (learning stage), is less than 250 ns for binary neurons in a Hopfield network. Larger networks can be built from several circuits, and combined with transputer microprocessors these chips could realize fine-grain parallel machines achieving computing times several orders of magnitude shorter than those of conventional computers.
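The Hebbian storage and resolution steps mentioned above can be sketched in software. The following is a minimal illustration, not the chip's actual microcode: it assumes a Hopfield-style network of binary (+1/−1) neurons, exact 16-bit integer weights, and an outer-product Hebb rule with zeroed diagonal; all function names are illustrative.

```python
import numpy as np

def hebb_store(patterns, n):
    """Learning stage sketch: accumulate outer-product (Hebbian)
    weights as exact 16-bit integers, with a zero diagonal."""
    w = np.zeros((n, n), dtype=np.int16)
    for p in patterns:
        p = np.asarray(p, dtype=np.int16)
        w += np.outer(p, p).astype(np.int16)
    np.fill_diagonal(w, 0)
    return w

def update_neuron(w, state, i):
    """Resolution stage sketch for one output neuron:
    threshold the exact integer local field."""
    h = int(np.dot(w[i].astype(np.int64), state))  # exact integer sum
    return 1 if h >= 0 else -1

# Store one pattern and verify it is a fixed point of the dynamics.
pattern = [1, -1, 1, -1]
w = hebb_store([pattern], n=4)
state = np.array(pattern, dtype=np.int16)
recalled = [update_neuron(w, state, i) for i in range(4)]
print(recalled)  # → [1, -1, 1, -1]
```

On the chip, the dot product inside `update_neuron` is what is parallelized over input neurons, which is why a full single-neuron update fits in the quoted sub-250 ns budget.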