DocumentCode :
2770944
Title :
An Immune and a Gradient-Based Method to Train Multi-Layer Perceptron Neural Networks
Author :
Pasti, Rodrigo ; De Castro, Leandro Nunes
Author_Institution :
Catholic Univ. of Santos, Sao Paulo
fYear :
2006
fDate :
0-0 0
Firstpage :
2075
Lastpage :
2082
Abstract :
Multi-layer perceptron (MLP) neural network training can be seen as a special case of function approximation in which no explicit model of the data is assumed. In its simplest form, it corresponds to finding a set of weights that minimizes the network's training and generalization errors. Various methods can be used to determine these weights, from standard optimization methods (e.g., gradient-based algorithms) to bio-inspired heuristics (e.g., evolutionary algorithms). Focusing on the problem of finding appropriate weight vectors for MLP networks, this paper proposes the use of an immune algorithm and a second-order gradient-based technique to train MLPs. Results are reported for classification and function approximation tasks, and the approaches are compared with respect to the types of problems for which each is better suited.
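The abstract frames MLP training as a search for a weight vector that minimizes error, a search that either a gradient-based method or an immune algorithm can carry out. The sketch below is only an illustrative, hypothetical example of the immune-inspired side of that idea: a clonal-selection-style search over flattened MLP weights on the XOR task. The network size, mutation rule, population parameters, and task are assumptions chosen for brevity, not the algorithm actually used in the paper.

```python
# Illustrative sketch only: a clonal-selection-style search over MLP weights.
# This is NOT the authors' algorithm; the 2-4-1 network, mutation scheme, and
# XOR task are assumptions chosen to keep the example small and runnable.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, a classic non-linearly-separable classification problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

N_IN, N_HID, N_OUT = 2, 4, 1
N_WEIGHTS = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # weights + biases


def forward(w, X):
    """Evaluate a 2-4-1 MLP whose parameters are flattened into vector w."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output


def mse(w):
    """Training error: mean squared error over the data set."""
    return float(np.mean((forward(w, X) - y) ** 2))


# Clonal-selection-style loop: clone good antibodies (weight vectors),
# mutate clones with a step that shrinks for better-ranked antibodies,
# and keep an improving individual for each slot in the population.
POP, CLONES, GENS, BETA = 20, 5, 300, 1.0
pop = rng.normal(0.0, 1.0, size=(POP, N_WEIGHTS))

for gen in range(GENS):
    errors = np.array([mse(w) for w in pop])
    pop = pop[np.argsort(errors)]               # best antibody first
    new_pop = [pop[0]]                          # elitism: keep best antibody
    for rank, w in enumerate(pop):
        alpha = BETA * np.exp(-(POP - rank) / POP)   # smaller step for better rank
        clones = w + rng.normal(0.0, alpha, size=(CLONES, N_WEIGHTS))
        best_clone = min(clones, key=mse)
        new_pop.append(best_clone if mse(best_clone) < mse(w) else w)
    pop = np.array(new_pop[:POP])

best = min(pop, key=mse)
print("final MSE:", mse(best))
print("predictions:", forward(best, X).ravel().round(2))
```

A second-order gradient-based trainer, as mentioned in the abstract, would instead update a single weight vector using curvature information (for example a quasi-Newton or Levenberg-Marquardt-style step) rather than evolving a population.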
Keywords :
function approximation; gradient methods; learning (artificial intelligence); multilayer perceptrons; MLP neural network training; bio-inspired heuristics; immune algorithm; second-order gradient-based technique; standard optimization methods; Backpropagation algorithms; Evolutionary computation; Heuristic algorithms; Immune system; Machine learning algorithms; Multi-layer neural network; Neural networks; Optimization methods
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2006 International Joint Conference on Neural Networks (IJCNN '06)
Conference_Location :
Vancouver, BC
Print_ISBN :
0-7803-9490-9
Type :
conf
DOI :
10.1109/IJCNN.2006.246977
Filename :
1716367