Title :
On the system identification convergence model for perceptron learning algorithms
Author :
Shynk, John J. ; Bershad, Neil J.
Author_Institution :
Dept. of Electr. & Comput. Eng., Univ. of California, Santa Barbara, CA, USA
Date :
31 Oct-2 Nov 1994
Abstract :
The convergence behavior of perceptron learning algorithms has been difficult to analyze because of their inherent nonlinearity and the lack of an appropriate model for the training signals. In many cases, extensive computer simulations have been the only means of quantifying their performance. We previously introduced a stochastic convergence model based on a system identification formulation of the training data that allows one to derive closed-form expressions for the stationary points and cost functions, as well as deterministic recursions for the transient learning behavior. We provide an overview of this approach and describe how it is applied to single- and two-layer perceptron configurations.
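Illustrative sketch (not the authors' exact formulation): the system identification viewpoint can be mimicked by generating the desired response from a fixed reference perceptron driven by i.i.d. Gaussian inputs and training an adaptive single-layer perceptron on that data. Ensemble-averaging the weight-error norm over many runs gives a Monte Carlo estimate of the transient learning curve that the paper's deterministic recursions are meant to predict in closed form. All parameter values below (dimension, step size, ensemble size) are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8          # input dimension (assumed)
mu = 0.01      # step size (assumed)
steps = 2000   # training iterations per run
runs = 200     # ensemble size for averaging

w_ref = rng.standard_normal(N)            # reference ("unknown system") weights
w_ref /= np.linalg.norm(w_ref)

err_norm = np.zeros(steps)
for _ in range(runs):
    w = np.zeros(N)                       # adaptive perceptron weights
    for n in range(steps):
        x = rng.standard_normal(N)        # zero-mean Gaussian training input
        d = np.sign(w_ref @ x)            # desired output from reference perceptron
        y = np.sign(w @ x)                # adaptive perceptron output
        w += mu * (d - y) * x             # Rosenblatt-style perceptron update
        err_norm[n] += np.linalg.norm(w - w_ref)
err_norm /= runs

# err_norm[n] is a simulated estimate of the mean weight-error norm versus
# iteration; the paper's analysis aims to predict such curves without simulation.
print(err_norm[::500])
```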
Keywords :
Gaussian processes; convergence; feedforward neural nets; identification; learning (artificial intelligence); multilayer perceptrons; stochastic processes; transient analysis; closed-form expressions; computer simulations; cost functions; deterministic recursions; nonlinearity; perceptron learning algorithms; single-layer perceptron; stationary points; stochastic Gaussian model; stochastic convergence model; system identification convergence model; training data; training signals; transient learning behavior; two-layer perceptron; Algorithm design and analysis; Closed-form solution; Computer simulation; Convergence; Cost function; Multilayer perceptrons; Signal analysis; Stochastic systems; System identification; Training data;
Conference_Title :
Conference Record of the Twenty-Eighth Asilomar Conference on Signals, Systems and Computers, 1994
Conference_Location :
Pacific Grove, CA
Print_ISBN :
0-8186-6405-3
DOI :
10.1109/ACSSC.1994.471587