Title :
Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks
Author :
Jabri, Marwan ; Flower, Barry
Author_Institution :
Sch. of Electr. Eng., Sydney Univ., NSW, Australia
Date :
1/1/1992
Abstract :
Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that gradient descent with direct approximation of the gradient, instead of back-propagation, is more economical for parallel analog implementations. This technique, called "weight perturbation", is also shown to be suitable for multilayer recurrent networks. A discrete-level analog implementation is presented, with the training of an XOR network as an example.
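Example :
The following is a minimal sketch, in software, of the weight-perturbation idea the abstract describes: each weight is perturbed in turn, the change in network error gives a forward-difference estimate of the gradient, and an ordinary gradient-descent update follows. The 2-2-1 logistic network, the squared-error measure, the perturbation size, and the learning rate are all assumptions for illustration; this is not a reproduction of the paper's discrete-level analog implementation.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# 2-2-1 network with logistic units (assumed architecture)
W1 = rng.normal(0.0, 0.5, (2, 2)); b1 = np.zeros(2)
W2 = rng.normal(0.0, 0.5, (2, 1)); b2 = np.zeros(1)
params = [W1, b1, W2, b2]

def forward(x):
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # output layer

def error():
    return 0.5 * np.sum((forward(X) - T) ** 2)    # summed squared error

pert, lr = 1e-3, 0.5   # perturbation size and learning rate (assumed values)
for epoch in range(5000):
    for p in params:
        for i in range(p.size):
            E0 = error()                 # error with the unperturbed weight
            p.flat[i] += pert            # perturb one weight
            grad = (error() - E0) / pert # forward-difference gradient estimate
            p.flat[i] -= pert + lr * grad  # restore weight, then descend

# Convergence depends on the random initialization; this only reports the result.
print("final error:", error(), "outputs:", forward(X).ravel())

Each update needs only forward passes through the network, which is the economy the abstract claims for parallel analog hardware: no backward error-propagation circuitry is required.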
Keywords :
VLSI; learning systems; neural nets; perturbation theory; XOR network; analog VLSI implementation; direct approximation; feedforward networks; gradient descent; learning technique; multilayer perceptrons; optimal architecture; parallel analog implementations; recurrent multilayer networks; weight perturbation; Analog computers; Finite difference methods; Hardware; Multilayer perceptrons; Network-on-a-chip; Neurons; Nonhomogeneous media; Power generation economics; Very large scale integration; Wires;
Journal_Title :
IEEE Transactions on Neural Networks