Title :
Gaussian perceptron: learning algorithms
Author :
Kwon, Taek Mu; Zervakis, Michael E.
Author_Institution :
Dept. of Comput. Eng., Minnesota Univ., Duluth, MN, USA
Abstract :
The authors present a neural network structure whose nodes employ a Gaussian-type activation function. Three learning algorithms are introduced and compared: the Gaussian perceptron learning (GPL) algorithm, which is based on the conventional perceptron convergence procedure; the least-squares error (LSE) algorithm, which follows the classical steepest-descent approach; and the least-log squares error (LLSE) algorithm, a gradient method on a logarithmic objective function. In particular, the convergence of the GPL algorithm is proved. The performance of each algorithm is demonstrated on benchmark problems.
Keywords :
convergence; learning (artificial intelligence); least squares approximations; neural nets; Gaussian perceptron learning; classical steepest descent approach; gradient method; learning algorithms; least-log squares error; least-squares error; log objective function; neural network structure; perceptron convergence; Computational modeling; Computer networks; Convergence; Gaussian processes; Gradient methods; Logic; Neural networks; Proposals; Shape; Vectors;
Conference_Title :
1992 IEEE International Conference on Systems, Man and Cybernetics
Conference_Location :
Chicago, IL
Print_ISBN :
0-7803-0720-8
DOI :
10.1109/ICSMC.1992.271793