DocumentCode :
2970301
Title :
Winning-weighted competitive learning: a generalization of Kohonen learning
Author :
Wang, Zhicheng
Author_Institution :
Dept. of Electr. & Comput. Eng., Waterloo Univ., Ont., Canada
Volume :
3
fYear :
1993
fDate :
25-29 Oct. 1993
Firstpage :
2452
Abstract :
Kohonen learning is the basis for training a number of self-organizing neural networks. Building a nonparametric model to estimate a probability density function p(x) is an essential problem in vector quantization, pattern recognition, control, and many other areas. In this paper, the authors present winning-weighted competitive learning (WWCL), a generalization of Kohonen learning that achieves a better approximation of p(x) and faster learning convergence by introducing the principle of maximum information preservation into the learning. Formulas are given for the WWCL competition rule with a winning-weighted distortion measure, the win-rate update, and the synaptic vector learning law. The proposed algorithm is a promising alternative and improvement to the generalized Lloyd algorithm (GLA), an iterative descent algorithm whose distortion function decreases monotonically toward a local minimum and the most widely used technique for designing vector quantizers. Like Kohonen learning, WWCL is an "on-line" algorithm: the codebook is designed while training data arrive, and the reduction of the distortion function is not necessarily monotonic. The maximum-information-preservation term in WWCL plays a role similar to that of a Lagrange multiplier term and guides the learning toward a final globally optimal solution regardless of initial conditions. Experimental results for Gauss-Markov data sources show that WWCL consistently produces better codebooks than Kohonen learning and the GLA. WWCL may therefore prove to be a better learning algorithm than Kohonen learning for self-organizing neural networks and a valuable alternative to both Kohonen learning and the GLA for vector quantizer design.
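Illustrative Sketch :
The record reproduces none of the paper's formulas, so the following is a minimal, hypothetical Python sketch of the WWCL idea described in the abstract. It assumes a frequency-sensitive form of the winning-weighted distortion (each codeword's squared error is scaled by its relative win rate) so that seldom-winning codewords can still win the competition, pushing the codebook toward equal win probabilities in the spirit of maximum information preservation. The function name wwcl, the exact weighting, and the decaying learning-rate schedule are assumptions for illustration, not the paper's definitions.

import numpy as np

def wwcl(data, num_codewords, epochs=10, lr0=0.1):
    rng = np.random.default_rng(0)
    # Initialize the codebook from randomly chosen training vectors.
    codebook = data[rng.choice(len(data), num_codewords, replace=False)].copy()
    wins = np.ones(num_codewords)          # win counts (assumed form of the win rate)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)  # decaying learning rate, as in Kohonen learning
        for x in data:
            # Winning-weighted competition (assumed form): bias the squared-error
            # distortion by each codeword's relative win rate so under-used
            # codewords can win, steering the codebook toward equal win
            # probabilities.
            d = (wins / wins.sum()) * np.sum((codebook - x) ** 2, axis=1)
            j = int(np.argmin(d))
            wins[j] += 1                            # win-rate update
            codebook[j] += lr * (x - codebook[j])   # synaptic vector learning law
    return codebook

# Example: design an 8-codeword quantizer for 2-D Gaussian data.
data = np.random.default_rng(1).normal(size=(1000, 2))
cb = wwcl(data, num_codewords=8)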
Keywords :
self-organising feature maps; unsupervised learning; vector quantisation; Gauss-Markov data sources; Kohonen learning; codebooks; competition rule; generalised Lloyd algorithm; iterative descent algorithm; maximum information preservation; nonparametric model; pattern recognition; probability density function; self-organising neural networks; synaptic vector learning law; vector quantization; win rate update; winning-weighted competitive learning; winning-weighted distortion measure; Algorithm design and analysis; Convergence; Distortion measurement; Iterative algorithms; Neural networks; Pattern recognition; Probability density function; Rate distortion theory; Training data; Vector quantization;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Proceedings of 1993 International Joint Conference on Neural Networks (IJCNN '93-Nagoya)
Print_ISBN :
0-7803-1421-2
Type :
conf
DOI :
10.1109/IJCNN.1993.714220
Filename :
714220