DocumentCode :
3697311
Title :
What the brain tells us about the future of silicon
Author :
Jeff Hawkins
Author_Institution :
Numenta Inc., Redwood City, CA, U.S.A.
fYear :
2015
Firstpage :
1
Lastpage :
4
Abstract :
Many computer and semiconductor manufacturers are looking for new growth opportunities that are not based on traditional von Neumann architectures. They are also concerned about the potential end of “Moore's law”. This has led to increased interest in artificial neural networks and in neuromorphic hardware that can support these systems. It is well known that the brain is power efficient and naturally fault tolerant. Therefore, much research is being done on how silicon can support neural architectures to achieve greater power efficiency and greater storage density. I will make two main arguments.
1) Neuron models need to support active dendrites and thousands of synapses. Biological neurons have thousands of synapses, which are arranged along dendrites. Dendrites are themselves active processing elements. However, lacking a theory of why neurons have active dendrites, almost all artificial neural networks, such as those used in deep learning, use artificial neurons without active dendrites and with unrealistically few synapses. We now know that active dendrites combined with sparse activations allow individual neurons to recognize hundreds of unique patterns [1], [2]. This enables neurons to learn sequences of patterns, which is necessary for sensory inference and behavior. Neuromorphic HW needs to accommodate these more complex neuron models.
2) Learning in neural tissue is achieved via rewiring, not synaptic weight change. Almost all artificial neural networks are built upon the assumption that learning is achieved via changes in the strength of synapses. However, we now know that most learning in the cortex occurs via the growth of new synapses [3]. Memory is a rewiring problem, not a storage problem. This has important consequences for the design of neuromorphic systems at the level of the synapse and at the systems level.
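The two arguments can be illustrated with a minimal sketch (not the author's implementation; all class and parameter names here are hypothetical): a neuron is modeled as a set of dendritic segments, each holding a sparse set of binary synapses. A segment fires when its overlap with the active inputs crosses a threshold, and learning grows new synapses onto a new segment (rewiring) rather than adjusting weights.

```python
import random

class Neuron:
    """Sketch of a neuron with active dendritic segments.

    Each segment stores a sparse set of input indices (binary synapses,
    no weights). A segment becomes active when enough of its synapses
    coincide with the currently active inputs (an overlap threshold).
    """

    def __init__(self, threshold=8):
        self.segments = []          # list of sets of input indices
        self.threshold = threshold  # coincidences needed to fire a segment

    def recognizes(self, active_inputs):
        """True if any dendritic segment matches the sparse input."""
        active = set(active_inputs)
        return any(len(seg & active) >= self.threshold for seg in self.segments)

    def learn(self, active_inputs, synapses_per_segment=12):
        """Learning by rewiring: grow a new segment whose synapses
        connect to a random sample of the currently active inputs."""
        sample = random.sample(sorted(active_inputs), synapses_per_segment)
        self.segments.append(set(sample))

# One neuron can learn many distinct sparse patterns, one per segment.
n = Neuron(threshold=8)
pattern_a = set(random.sample(range(2048), 40))  # sparse: 40 of 2048 bits
pattern_b = set(random.sample(range(2048), 40))
n.learn(pattern_a)
n.learn(pattern_b)
assert n.recognizes(pattern_a) and n.recognizes(pattern_b)
```

Because each segment samples only 12 of 40 active bits yet needs only 8 coincidences to fire, recognition is robust while false matches against other sparse patterns remain improbable in a 2048-bit space.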
I will present a theory of how biological neurons with active dendrites work together in large networks to do inference and prediction in sensory data streams [4], [5]. Such networks are naturally fault tolerant due to the mathematics of sparse representations [6]. I will argue that these networks will form the basis of machine intelligence. To build systems that are as capable as biological brains will require the creation of new HW architectures that support neurons with active dendrites and large-scale rewiring.
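The fault-tolerance claim for sparse representations can be checked with a small worked example (a sketch under assumed numbers, not taken from [6]): if a stored segment samples 20 of a pattern's 40 active bits and 10 of those bits are lost to faulty units, inclusion-exclusion guarantees the overlap never falls below 20 + 30 − 40 = 10, so a threshold of 10 still matches.

```python
import random

random.seed(42)

# Sparse binary pattern: 40 active bits out of 2048.
pattern = set(random.sample(range(2048), 40))

# Stored segment samples half of the pattern's active bits.
stored = set(random.sample(sorted(pattern), 20))
threshold = 10

def matches(observed):
    """Overlap-threshold match against the stored sparse segment."""
    return len(stored & observed) >= threshold

# Simulate faulty units: only 30 of the 40 active bits survive.
degraded = set(random.sample(sorted(pattern), 30))

# Worst case overlap is 20 + 30 - 40 = 10, so the match still holds.
assert matches(pattern)
assert matches(degraded)
```

This worst-case bound is why networks built on sparse representations degrade gracefully: losing a quarter of the active units provably cannot push the overlap below threshold here, whereas a dense representation offers no such margin.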
Keywords :
"Neurons","Neuromorphics","Computer architecture","Artificial neural networks","Brain models"
Publisher :
ieee
Conference_Titel :
2015 Fourth Berkeley Symposium on Energy Efficient Electronic Systems (E3S)
Type :
conf
DOI :
10.1109/E3S.2015.7336786
Filename :
7336786