Title :
On convergence properties of pocket algorithm
Author_Institution :
Istituto per i Circuiti Elettronici, CNR, Genova, Italy
Date :
5/1/1997
Abstract :
The problem of finding optimal weights for a single threshold neuron, starting from a general training set, is considered. Among the variety of possible learning techniques, the pocket algorithm possesses a convergence theorem that asserts its optimality. However, the original proof ensures the asymptotic achievement of an optimal weight vector only if the inputs in the training set are integer or rational. This limitation is overcome in this paper by introducing a different approach that leads to the general result. Furthermore, a modified version of the learning method considered, called the pocket algorithm with ratchet, is shown to reach an optimal configuration within a finite number of iterations, independently of the given training set.
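For readers unfamiliar with the method, the following is a minimal sketch of the pocket algorithm with ratchet for a single threshold neuron, assuming {-1, +1} targets and an appended bias input. The function name, stopping rule, and update bookkeeping are illustrative assumptions, not the paper's notation.
```python
import numpy as np

def pocket_with_ratchet(X, y, max_iter=10000, seed=None):
    """X: (n, d) inputs; y: (n,) targets in {-1, +1}.
    Returns the pocket (best-so-far) weight vector, bias included."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias input
    w = np.zeros(Xb.shape[1])                        # current perceptron weights
    pocket_w = w.copy()                              # best weights found so far
    pocket_correct = np.sum(np.sign(Xb @ pocket_w) == y)
    run, pocket_run = 0, 0                           # consecutive-correct run lengths

    for _ in range(max_iter):
        i = rng.integers(len(y))                     # draw a random training example
        if np.sign(Xb[i] @ w) == y[i]:
            run += 1
            # Ratchet step: replace the pocket weights only if the current
            # weights have a longer correct run AND classify strictly more
            # training examples correctly than the pocket weights do.
            if run > pocket_run:
                correct = np.sum(np.sign(Xb @ w) == y)
                if correct > pocket_correct:
                    pocket_w, pocket_correct, pocket_run = w.copy(), correct, run
                    if correct == len(y):            # training set fully classified
                        break
        else:
            w = w + y[i] * Xb[i]                     # standard perceptron update
            run = 0
    return pocket_w
```
In this sketch the ratchet is the extra check that the candidate weights classify strictly more examples than the stored pocket weights; this guarantees the pocket error is non-increasing and, as the paper shows, lets the method reach an optimal configuration in finitely many iterations.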
Keywords :
convergence of numerical methods; iterative methods; learning (artificial intelligence); optimisation; perceptrons; convergence; iterative method; neural networks; optimal learning; optimal weight vector; optimal weights; perceptron; pocket algorithm; threshold neuron; Chaos; Convergence; Cost function; Learning systems; Least squares approximation; Neural networks; Neurons; Optimization methods; Risk management; Speech recognition;
Journal_Title :
IEEE Transactions on Neural Networks