DocumentCode :
3759243
Title :
Multivariate Approximation Methods Using Polynomial Models: A Comparative Study
Author :
López-Peña; Angel Kuri-Morales
Author_Institution :
Posgrado en Cienc. e Ing. de la Comput., IIMAS-UNAM, Mexico City, Mexico
fYear :
2015
Firstpage :
131
Lastpage :
138
Abstract :
A frequent problem in artificial intelligence is the one associated with so-called supervised learning: the need to find an expression for a dependent variable as a function of several independent ones. Several algorithms allow us to solve the bivariate problem. The true challenge, however, arises when the number of independent variables is large. Relatively new tools have been developed to tackle this kind of problem. Thus, Multi-Layer Perceptron networks (MLPs) may be seen as multivariate approximation algorithms. A commonly cited disadvantage of MLPs, however, is that they remain a "black-box" method: they do not yield an explicit closed expression for the solution. Rather, we are left with the need to express it via the architecture of the MLP and the values of the trained connections. In this paper we explore three methods that allow us to express the solution to multivariate problems in closed form: a) Fast Ascent (FA), b) Levenberg-Marquardt (LM), and c) Powell's Dog-Leg (PDL). These yield closed expressions when presented with problems involving multiple independent variables. We discuss and compare these four methods (MLPs, FA, LM, and PDL) and their possible application to pattern recognition in mobile robot environments and to artificial intelligence in general.
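The abstract presents Levenberg-Marquardt as one route to a closed-form polynomial model. As a rough illustration only (not the authors' implementation), the sketch below fits a bivariate polynomial z ≈ c0 + c1·x + c2·y + c3·x·y with a minimal damped LM loop; the function names (`lm_poly_fit`, `design`) and the synthetic data are assumptions made for this example.

```python
import numpy as np

def lm_poly_fit(X, z, design, c0, lam=1e-3, iters=50):
    """Minimal Levenberg-Marquardt loop for a polynomial model that is
    linear in its coefficients (illustrative sketch, not the paper's code).
    design(X) returns the Jacobian, which is constant for a linear model."""
    c = c0.copy()
    J = design(X)                              # Jacobian w.r.t. coefficients
    for _ in range(iters):
        r = z - J @ c                          # current residuals
        A = J.T @ J + lam * np.eye(len(c))     # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        c_new = c + step
        if np.sum((z - J @ c_new) ** 2) < np.sum(r ** 2):
            c, lam = c_new, lam * 0.5          # accept step, relax damping
        else:
            lam *= 2.0                         # reject step, increase damping
    return c

def design(X):
    """Bivariate polynomial basis: [1, x, y, x*y]."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y])

# Synthetic data from a known polynomial, plus small noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
true_c = np.array([0.5, -1.0, 2.0, 0.7])
z = design(X) @ true_c + 0.01 * rng.normal(size=200)

c_hat = lm_poly_fit(X, z, design, c0=np.zeros(4))
print(np.round(c_hat, 2))
```

The recovered coefficient vector `c_hat` *is* the closed-form expression the abstract contrasts with an MLP's trained weights: the fitted model can be written out directly as z = c0 + c1·x + c2·y + c3·x·y.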
Keywords :
"Approximation algorithms","Damping","Artificial intelligence","Approximation error","Jacobian matrices","Urban areas"
Publisher :
ieee
Conference_Titel :
2015 Fourteenth Mexican International Conference on Artificial Intelligence (MICAI)
Print_ISBN :
978-1-5090-0322-8
Type :
conf
DOI :
10.1109/MICAI.2015.26
Filename :
7429425