DocumentCode :
3255565
Title :
State Aggregation by Growing Neural Gas for Reinforcement Learning in Continuous State Spaces
Author :
Baumann, Michael ; Büning, Hans Kleine
Author_Institution :
Int. Grad. Sch. of Dynamic Intell. Syst., Univ. of Paderborn, Paderborn, Germany
Volume :
1
fYear :
2011
fDate :
18-21 Dec. 2011
Firstpage :
430
Lastpage :
435
Abstract :
One of the conditions for the convergence of Q-Learning is that each state-action pair is visited infinitely (or at least sufficiently) often. This requirement raises problems for large or continuous state spaces. In particular, in continuous state spaces a discretization fine enough to cover all relevant information usually results in an extremely large state space. In order to speed up and improve learning, it is highly beneficial to add generalization to Q-Learning and thus exploit experiences gained earlier. To achieve this, we compute a state space abstraction with a combination of growing neural gas and Q-Learning. This abstraction respects similarity in the state and action space and is constructed from information obtained through interaction with the environment during learning. We examine the proposed algorithm on a continuous-state reinforcement learning problem and show that the approximated state space and the generalization speed up learning.
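The abstract gives no implementation details, so the following is only a minimal Python sketch of the general idea: a simplified growing neural gas whose units serve as aggregated states for tabular Q-Learning. The GNGAggregator class, all hyperparameter names and values, and the toy one-dimensional task are illustrative assumptions, not the authors' algorithm; full GNG additionally maintains an edge topology with edge aging, which is omitted here.

import numpy as np

class GNGAggregator:
    """Simplified growing neural gas used as a state aggregator.

    Full GNG also maintains an edge topology with edge aging; that part
    is omitted here, and all hyperparameters are illustrative guesses.
    """

    def __init__(self, dim, eps_w=0.05, eps_n=0.005,
                 insert_interval=100, max_units=50, err_decay=0.995):
        rng = np.random.default_rng(0)
        self.units = [rng.uniform(-1.0, 1.0, dim) for _ in range(2)]
        self.err = [0.0, 0.0]
        self.eps_w, self.eps_n = eps_w, eps_n
        self.insert_interval, self.max_units = insert_interval, max_units
        self.err_decay, self.t = err_decay, 0

    def adapt(self, x):
        """Move the two nearest units toward x, grow the net periodically,
        and return the winner's index as the aggregated (discrete) state."""
        d = np.array([np.linalg.norm(x - u) for u in self.units])
        order = np.argsort(d)
        w, s = int(order[0]), int(order[1])
        self.err[w] += float(d[w]) ** 2                     # accumulate quantization error
        self.units[w] += self.eps_w * (x - self.units[w])   # move winner toward x
        self.units[s] += self.eps_n * (x - self.units[s])   # move runner-up toward x
        self.err = [e * self.err_decay for e in self.err]
        self.t += 1
        if self.t % self.insert_interval == 0 and len(self.units) < self.max_units:
            q = int(np.argmax(self.err))                    # worst-quantized unit
            dq = [np.inf if i == q else np.linalg.norm(self.units[q] - u)
                  for i, u in enumerate(self.units)]
            f = int(np.argmin(dq))                          # its nearest other unit
            self.units.append(0.5 * (self.units[q] + self.units[f]))
            self.err[q] *= 0.5
            self.err[f] *= 0.5
            self.err.append(self.err[q])
        return w

# Toy continuous-state task (assumed): walk along [-1, 1] to reach x = 0.8.
rng = np.random.default_rng(1)
n_actions, alpha, gamma, epsilon = 2, 0.1, 0.95, 0.1
gng = GNGAggregator(dim=1)
Q = np.zeros((gng.max_units, n_actions))                    # Q-table over unit indices

x = np.array([rng.uniform(-1.0, 1.0)])
s = gng.adapt(x)
for step in range(20000):
    a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
    x_next = np.clip(x + (0.05 if a == 1 else -0.05), -1.0, 1.0)
    done = abs(x_next[0] - 0.8) < 0.05
    r = 1.0 if done else -0.01
    s_next = gng.adapt(x_next)                              # aggregated successor state
    target = r if done else r + gamma * float(np.max(Q[s_next]))
    Q[s, a] += alpha * (target - Q[s, a])                   # standard Q-Learning update
    if done:                                                # episodic reset
        x = np.array([rng.uniform(-1.0, 1.0)])
        s = gng.adapt(x)
    else:
        x, s = x_next, s_next

Because the Q-table is indexed by unit indices and the units move during learning, values generalize across nearby continuous states that map to the same unit, which is the intended source of the speed-up described in the abstract.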
Keywords :
learning (artificial intelligence); neural nets; state-space methods; Q-learning convergence; action space abstraction; continuous state spaces; neural gas; reinforcement learning; state aggregation; state space abstraction; Approximation algorithms; Artificial neural networks; Function approximation; Neurons; Tiles; Vectors
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Machine Learning and Applications and Workshops (ICMLA), 2011 10th International Conference on
Conference_Location :
Honolulu, HI
Print_ISBN :
978-1-4577-2134-2
Type :
conf
DOI :
10.1109/ICMLA.2011.134
Filename :
6147011