Title :
A parallel processing VLSI BAM engine
Author :
Hasan, S. M Rezaul ; Siong, Ng Kang
Author_Institution :
VLSI Res. Lab., Universiti Sains Malaysia, Perak, Malaysia
fDate :
3/1/1997 12:00:00 AM
Abstract :
In this paper, emerging parallel/distributed architectures are explored for the digital VLSI implementation of the adaptive bidirectional associative memory (BAM) neural network. A single-instruction-stream, many-data-stream (SIMD) parallel processing architecture is developed for the adaptive BAM neural network, taking advantage of the inherent parallelism in BAM. This novel neural processor architecture is named the sliding feeder BAM array processor (SLiFBAM). The SLiFBAM processor can be viewed as a two-stroke neural processing engine; it has four operating modes: learn pattern, evaluate pattern, read weight, and write weight. The design of a SLiFBAM VLSI processor chip is also described. Using 2-μm scalable CMOS technology, a SLiFBAM processor chip with 4+4 neurons and eight modules of 256×5-bit local weight-storage SRAM was integrated on a 6.9×7.4 mm² prototype die. The system architecture is highly flexible and modular, enabling the construction of larger BAM networks of up to 252 neurons using multiple SLiFBAM chips.
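The "learn pattern" and "evaluate pattern" modes mentioned in the abstract correspond to the standard adaptive BAM operations: Hebbian (outer-product) weight accumulation over bipolar pattern pairs, and bidirectional recall that alternates forward and backward passes until the pair stabilizes. The sketch below illustrates those classic BAM dynamics in plain Python; it is a conceptual reference model, not a description of the SLiFBAM chip's internal datapath, and the function names (`bam_learn`, `bam_recall`) and the tie-breaking rule in `sign` are illustrative choices.

```python
def sign(v):
    # Bipolar threshold; ties resolved to +1 (an implementation choice, not from the paper).
    return 1 if v >= 0 else -1

def bam_learn(pairs):
    """Learn-pattern mode: accumulate Hebbian outer products W = sum x y^T
    over bipolar (+1/-1) pattern pairs."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * m for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += x[i] * y[j]
    return W

def bam_recall(W, x, max_iter=20):
    """Evaluate-pattern mode: alternate X->Y and Y->X passes through W
    until the (x, y) pair reaches a stable resonance."""
    n, m = len(W), len(W[0])
    x = list(x)
    y = [sign(sum(x[i] * W[i][j] for i in range(n))) for j in range(m)]
    for _ in range(max_iter):
        x_new = [sign(sum(y[j] * W[i][j] for j in range(m))) for i in range(n)]
        y_new = [sign(sum(x_new[i] * W[i][j] for i in range(n))) for j in range(m)]
        if x_new == x and y_new == y:
            break
        x, y = x_new, y_new
    return x, y
```

In hardware terms, each inner sum is a multiply-accumulate across one row or column of the weight matrix, which is exactly the kind of regular, data-parallel work a SIMD array with per-module weight SRAM can distribute across processing elements.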
Keywords :
CMOS memory circuits; VLSI; associative processing; content-addressable storage; learning (artificial intelligence); neural chips; neural net architecture; 2 micron; BAM; CMOS; VLSI; bidirectional associative memory; evaluate pattern; learn pattern; neural network; parallel architectures; parallel processing; read weight; single instruction stream many data stream; sliding feeder BAM array processor; write weight; Adaptive systems; Associative memory; CMOS process; CMOS technology; Engines; Magnesium compounds; Neural networks; Neurons; Parallel processing; Very large scale integration;
Journal_Title :
Neural Networks, IEEE Transactions on