Title :
Using competitive learning for state-space partitioning
Author :
Zhang, Bing ; Grant, Edward
Author_Institution :
Singapore Inst. for Stand. & Ind. Res., Singapore
Abstract :
A control surface can be learned and represented by a neural network through a reinforcement learning scheme. The authors use a neural network to learn a mapping between a dynamic system's state space and the space of possible control actions. The state space is defined incrementally, and an appropriate control action is assigned to each part of it from a binary vector input. One problem with this type of learning control is learning the state-space partitioning itself, i.e., whether the system can automatically partition the state space into a number of control situations; if it can, learning proceeds faster and closer to optimally. The unsupervised learning algorithm for adaptive state-space partitioning is based both on BOXES and on G.A. Carpenter and S. Grossberg's (1988) ART network. The algorithm performed adequately in a series of performance trials, using the humanly partitioned BOXES learning algorithm as the performance measure.
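The combination of winner-take-all competitive learning with an ART-style vigilance test for incremental partitioning can be sketched as follows. This is a generic illustration, not the paper's exact algorithm: the function name, the Euclidean-distance match criterion, and the `vigilance` and `lr` parameters are assumptions. Each prototype stands in for one region of the state space (one "box"/control situation); when no existing prototype matches a state closely enough, a new region is allocated.

```python
import math

def partition_states(samples, vigilance=0.5, lr=0.2):
    """Incrementally partition a state space with competitive learning.

    A hedged sketch (not the authors' exact method): each prototype
    represents one partition of the state space; an ART-style vigilance
    test decides whether the winning prototype matches a sample, and a
    failed test allocates a new partition (new control situation).
    """
    prototypes = []
    for x in samples:
        if not prototypes:
            prototypes.append(list(x))
            continue
        # Winner-take-all: the nearest prototype wins the competition.
        winner = min(prototypes, key=lambda p: math.dist(p, x))
        if math.dist(winner, x) <= vigilance:
            # Resonance: move the winner toward the sample.
            for i in range(len(winner)):
                winner[i] += lr * (x[i] - winner[i])
        else:
            # Vigilance failed: create a new prototype for this region.
            prototypes.append(list(x))
    return prototypes
```

With a small vigilance the state space splits into many fine regions, mirroring a dense hand-designed BOXES grid; a large vigilance yields a coarse partition, which is the trade-off the paper's adaptive scheme aims to learn rather than fix by hand.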
Keywords :
adaptive control; intelligent control; learning (artificial intelligence); neural nets; state-space methods; ART network; BOXES; adaptive resonance theory; competitive learning; dynamic systems; intelligent control; learning control; neural network; state-space partitioning; Automatic control; Control systems; Electrical equipment industry; Humans; Machine learning algorithms; Neural networks; Partitioning algorithms; Size control; State-space methods; Subspace constraints;
Conference_Titel :
Proceedings of the 1992 IEEE International Symposium on Intelligent Control
Conference_Location :
Glasgow
Print_ISBN :
0-7803-0546-9
DOI :
10.1109/ISIC.1992.225123