Title :
Learning architectures with enhanced capabilities and easier training
Author :
Bogdan M. Wilamowski;Janusz Korniak
Author_Institution :
Auburn University, AL, USA
Abstract :
Although the discovery of the Error Back Propagation (EBP) learning algorithm was a real breakthrough, EBP is not only very slow but also incapable of training networks with super-compact architectures. The most noticeable progress came with the adaptation of the Levenberg-Marquardt (LM) algorithm to neural network training. The LM algorithm can train networks in 100 to 1000 times fewer iterations, but the size of the problems it can handle is significantly limited, and it was adapted primarily to traditional MLP architectures. More recently, two revolutionary concepts were developed: Support Vector Machines and Extreme Learning Machines. They are very fast, but they train only shallow networks with a single hidden layer, and such shallow networks have been shown to have very limited capabilities. It has already been demonstrated that super-compact architectures offer 10 to 100 times more processing power than commonly used learning architectures. For example, a shallow MLP architecture with 10 neurons can solve only a Parity-9 problem, whereas a deep FCC (Fully Connected Cascade) architecture with the same 10 neurons can solve a problem as large as Parity-1023. Unfortunately, because of the vanishing gradient problem, deep architectures are very difficult to train. By introducing additional connections across layers, it became possible to efficiently train deep networks using the powerful NBN (Neuron-by-Neuron) algorithm. Our early results show that there is a solution for this difficult problem.
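To make the FCC topology concrete, the following is a minimal sketch (not from the paper) of a forward pass through a Fully Connected Cascade network, in which every neuron receives all network inputs plus the outputs of all preceding neurons; the neuron count, tanh activation, and random weights are illustrative assumptions. Note that the abstract's numbers are consistent with an n-neuron FCC handling parity problems up to 2^n - 1 inputs (2^10 - 1 = 1023).

```python
import numpy as np

def fcc_forward(x, neuron_weights):
    """Forward pass through a Fully Connected Cascade (FCC) network.

    Each neuron sees all original inputs plus the outputs of every
    preceding neuron (the cross-layer connections the abstract refers to).
    neuron_weights[k] holds the weight vector of neuron k, bias last.
    """
    signals = list(x)                          # raw network inputs
    for w in neuron_weights:
        net = np.dot(w[:-1], signals) + w[-1]  # weighted sum plus bias
        signals.append(np.tanh(net))           # bipolar activation (assumed)
    return signals[-1]                         # last cascaded neuron is the output

# Illustrative example: 3 cascaded neurons over 4 inputs; neuron k has
# 4 + k inputs plus a bias, so each cascade stage gains one connection.
rng = np.random.default_rng(0)
weights = [rng.standard_normal(4 + k + 1) for k in range(3)]
print(fcc_forward([1.0, -1.0, 1.0, -1.0], weights))
```

A 10-neuron instance of this topology is what the abstract credits with solving Parity-1023; the cross-layer connections are also what make gradients reach early neurons more directly than in a plain layered MLP.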
Keywords :
"Neurons","Training","Biological neural networks","Computer architecture","Support vector machines","Artificial neural networks","FCC"
Conference_Title :
2015 IEEE 19th International Conference on Intelligent Engineering Systems (INES)
DOI :
10.1109/INES.2015.7329714