Title :
Controlling the hidden layers' output to optimize the training process in the Deep Neural Network algorithm
Author :
Andreas; Mauridhi Hery Purnomo; Mochamad Hariadi
Author_Institution :
Information System, Universitas Pelita Harapan Surabaya, Surabaya, Indonesia
fDate :
6/1/2015
Abstract :
Deep learning is one of the most recent developments of the Artificial Neural Network (ANN) in machine learning. The Deep Neural Network (DNN) algorithm is commonly used in image and speech recognition applications. As a development of the Artificial Neural Network, a Deep Neural Network can contain many hidden layers. In a DNN, the output of each node is a quadratic function of its inputs, which makes the DNN training process very difficult. In this paper, we try to optimize the training process by slightly restructuring the deep architecture and combining several existing algorithms. The output error of each unit in the previous layer is calculated, and the weights of the unit with the smallest error are maintained in the next iteration. This paper uses MNIST handwritten digit images as its training and test data. After several tests, it can be concluded that with this optimization, which selects outputs in each hidden layer, the DNN training process becomes approximately 8% faster.
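Code_Sketch :
The abstract describes a per-iteration heuristic: compute the output error of each unit in a hidden layer, then keep (freeze) the weights of the unit with the smallest error in the next iteration. Below is a minimal sketch of that idea in Python/NumPy, assuming the per-unit error is the backpropagated delta averaged over a mini-batch; the toy network, the random data standing in for MNIST, and the exact freezing rule are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the paper's code) of maintaining the weights of the
# hidden unit with the smallest output error across iterations.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy one-hidden-layer network; random data stands in for MNIST batches.
n_in, n_hidden, n_out, lr = 4, 5, 3, 0.1
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))

x = rng.normal(size=(8, n_in))           # mini-batch of inputs
y = rng.integers(0, 2, size=(8, n_out))  # toy binary targets

for step in range(10):
    # Forward pass.
    h = sigmoid(x @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: standard backpropagation deltas for a sigmoid MLP.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Per-unit error for the hidden layer: mean |delta| over the batch
    # (an assumed error measure; the abstract does not specify one).
    unit_err = np.mean(np.abs(d_h), axis=0)
    frozen = np.argmin(unit_err)  # unit with the smallest error

    # Gradient updates; the frozen unit's incoming weights are maintained.
    dW1 = x.T @ d_h / len(x)
    dW1[:, frozen] = 0.0          # skip the update for this unit
    W1 -= lr * dW1
    W2 -= lr * (h.T @ d_out / len(x))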
Keywords :
"Training","Artificial neural networks","Computer architecture","Speech recognition","Image recognition","Backpropagation"
Conference_Title :
2015 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER)
Print_ISBN :
978-1-4799-8728-3
DOI :
10.1109/CYBER.2015.7288086