Title :
Liquid State Machine With Dendritically Enhanced Readout for Low-Power, Neuromorphic VLSI Implementations
Author :
Roy, Subhrajit ; Banerjee, Amitava ; Basu, Arindam
Author_Institution :
Sch. of Electr. & Electron. Eng., Nanyang Technol. Univ., Singapore, Singapore
Abstract :
In this paper, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM), a popular model for reservoir computing. Compared to the parallel perceptron architecture trained by the p-delta algorithm, which is the state of the art in readout-stage performance, our readout architecture and learning algorithm can attain better performance with significantly fewer synaptic resources, making it attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons having multiple dendrites with a lumped nonlinearity (a two-compartment model). The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons, and the learning algorithm tries to find the best "combination" of input connections on each branch to reduce the error. Hence, the learning involves network rewiring (NRW) of the readout network, similar to the structural plasticity observed in its biological counterparts. We show that, compared to a single perceptron using analog weights, this readout architecture can attain, even with the same number of binary-valued synapses, up to 3.3 times less error for a two-class spike train classification problem and 2.4 times less error for an input rate approximation task. Even with 60 times as many synapses, a group of 60 parallel perceptrons cannot attain the performance of the proposed dendritically enhanced readout. An additional advantage of this method for hardware implementations is that the "choice" of connectivity can be easily implemented by exploiting the address event representation (AER) protocols commonly used in current neuromorphic systems, where the connection matrix is stored in memory. Also, due to the use of binary synapses, our proposed method is more robust against statistical variations.
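Illustration (not part of the paper's abstract): the following minimal Python sketch shows how a dendritically enhanced readout with binary synapses and network rewiring might be simulated. The squared branch nonlinearity, the greedy random-swap rewiring rule, and all sizes and names are illustrative assumptions; the paper's exact nonlinearity and NRW schedule may differ.

```python
import numpy as np

# Minimal sketch of a dendritically enhanced readout with binary synapses.
# Assumptions (not from the paper): a squared branch nonlinearity and a greedy
# random-swap rewiring rule; sizes are arbitrary toy values.

rng = np.random.default_rng(0)

N_LIQUID = 100     # liquid (reservoir) neurons feeding the readout
N_BRANCHES = 10    # dendritic branches on the readout neuron
K_PER_BRANCH = 5   # binary synapses per branch (much smaller than N_LIQUID)

# Connectivity: each branch picks K liquid neurons; all weights are binary (1).
conn = np.array([rng.choice(N_LIQUID, K_PER_BRANCH, replace=False)
                 for _ in range(N_BRANCHES)])

def readout(x, conn):
    """Two-compartment readout: linear sum on each branch, a lumped branch
    nonlinearity (here b -> b**2), then a sum over all branches."""
    branch_sums = x[conn].sum(axis=1)      # shape (N_BRANCHES,)
    return np.sum(branch_sums ** 2)        # scalar readout output

def mse(X, y, conn):
    """Mean squared error of the readout over a batch of liquid states."""
    return np.mean([(readout(x, conn) - t) ** 2 for x, t in zip(X, y)])

def rewire_step(X, y, conn, err):
    """One network-rewiring (NRW) step: re-route one randomly chosen synapse
    to a new liquid neuron and keep the change only if the error drops."""
    b = rng.integers(N_BRANCHES)
    s = rng.integers(K_PER_BRANCH)
    trial = conn.copy()
    trial[b, s] = rng.integers(N_LIQUID)   # candidate replacement source
    trial_err = mse(X, y, trial)
    return (trial, trial_err) if trial_err < err else (conn, err)

# Toy usage: X holds (filtered) liquid-state vectors, y the target values.
X = rng.random((20, N_LIQUID))
y = rng.random(20)
err = mse(X, y, conn)
for _ in range(500):
    conn, err = rewire_step(X, y, conn, err)
print("final training MSE:", err)
```

Because the weights stay binary, learning amounts to choosing which liquid neuron drives each synapse slot; this kind of connection table is exactly what AER-based neuromorphic systems already store in memory, which is the hardware advantage the abstract points out.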
Keywords :
VLSI; neurophysiology; parallel architectures; perceptrons; readout electronics; AER protocols; LSM; NRW; address event representation; analog weights; binary synapses; biological neurons; two-class spike train classification problem; compartment model; dendrites; dendritically enhanced readout; hardware implementations; learning algorithm; liquid state machine; low-power VLSI implementations; lumped nonlinearity; network rewiring; neuro-inspired hardware-friendly readout stage; neuromorphic VLSI implementations; neuromorphic systems; nonlinear properties; p-delta algorithm; parallel perceptron architecture; parallel perceptrons; readout architecture; reservoir computing; single perceptron; statistical variations; structural plasticity; synaptic resources; Computer architecture; Hardware; Liquids; Neurons; Training; Vectors; Very large scale integration; Binary synapse; liquid state machine; neuromorphic engineering; nonlinear dendrite; readout; supervised learning;
Journal_Title :
Biomedical Circuits and Systems, IEEE Transactions on
DOI :
10.1109/TBCAS.2014.2362969