Abstract:
The main objective of this research is to investigate
the effect of neural network architecture parameters on
model behavior. Architectural factors such as the training
algorithm, the number of hidden-layer neurons, and the
data set design in the training stage, the changes made
to them, and finally their effect on the output of the
model were investigated. A database was developed for
modeling with a multi-layer perceptron.
In particular,
the modeling process employed three training algorithms:
Bayesian Regularization (BR), Scaled Conjugate Gradi-
ent (SCG), and Levenberg-Marquardt (LM). Model se-
lection was based on the lowest error rate and the
regression of the data, using a trial-and-error approach. The results
showed that models that greatly reduce the error have
less generalizability. Among them, the BR algorithm,
with a data set design of 15-15-70 (for the test, validation,
and training sections, respectively), reduced the
error better than the other algorithms but showed poor generalizability. In contrast, the LM algorithm has better generalizability
than the other two algorithms. Data analysis shows that, in most cases, when the amount
of data dedicated to the test and validation sections changes (increases or decreases), the model requires
more neurons in order to reduce the error.
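The 15-15-70 data set design described above amounts to shuffling the data and partitioning it into fixed fractions for the test, validation, and training sections. A minimal sketch in Python/NumPy, assuming a generic regression data set (`X` and `y` here are synthetic placeholders, not the study's database; the BR, SCG, and LM training algorithms themselves are not shown):

```python
import numpy as np

def split_dataset(X, y, train=0.70, val=0.15, test=0.15, seed=0):
    """Shuffle and partition a data set into training, validation,
    and test sections (the 70/15/15 design from the abstract)."""
    assert abs(train + val + test - 1.0) < 1e-9
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # random shuffle of sample indices
    n_train = int(train * len(X))
    n_val = int(val * len(X))
    # split indices at the train/validation and validation/test boundaries
    tr, va, te = np.split(idx, [n_train, n_train + n_val])
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])

# Hypothetical data: 200 samples with 3 input features
X = np.random.rand(200, 3)
y = X.sum(axis=1)
train_set, val_set, test_set = split_dataset(X, y)
```

Varying the `train`, `val`, and `test` fractions reproduces the kind of data set design change the study examines when relating split sizes to the required number of hidden neurons.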