DocumentCode
259578
Title
State Abstraction in Reinforcement Learning by Eliminating Useless Dimensions
Author
Cheng, Zhao; Ray, Laura E.
Author_Institution
Thayer Sch. of Eng., Dartmouth Coll., Hanover, NH, USA
fYear
2014
fDate
3-6 Dec. 2014
Firstpage
105
Lastpage
110
Abstract
Q-learning and other linear dynamic learning algorithms are subject to Bellman's curse of dimensionality for any realistic learning problem. This paper introduces a framework for satisficing state abstraction -- one that reduces state dimensionality, improving convergence and reducing computational and memory resources -- by eliminating useless state dimensions. Statistical parameters that depend on the state and Q-values identify the relevance of a given state space to a task space and allow the state elements that contribute least to task learning to be discarded. Empirical results of applying state abstraction to a canonical single-agent path planning task and to a more difficult multi-agent foraging problem demonstrate the utility of the proposed methods in improving learning convergence and performance in resource-constrained learning problems.
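As a rough illustration of the idea summarized in the abstract -- not the paper's actual statistical criterion -- the following Python sketch trains a tabular Q-learner on a toy grid task that includes one deliberately irrelevant state dimension, then scores each dimension by how much the learned greedy values vary along it; a near-zero score marks a candidate dimension for elimination. The environment, the variance heuristic, and all names are assumptions made for this example.

```python
# Illustrative sketch only: tabular Q-learning on a toy grid task, followed by a
# simple dimension-relevance score (variation of greedy values along each state
# dimension). This heuristic is an assumption for illustration, not the paper's
# statistical parameters.
import numpy as np

rng = np.random.default_rng(0)

# Toy state space: (x, y, noise), where the third dimension is irrelevant to reward.
X, Y, NOISE, ACTIONS = 5, 5, 3, 4
Q = np.zeros((X, Y, NOISE, ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1
goal = (4, 4)

def step(state, action):
    x, y, n = state
    dx, dy = [(1, 0), (-1, 0), (0, 1), (0, -1)][action]
    x = int(np.clip(x + dx, 0, X - 1))
    y = int(np.clip(y + dy, 0, Y - 1))
    n = int(rng.integers(NOISE))          # irrelevant dimension changes randomly
    reward = 1.0 if (x, y) == goal else -0.01
    return (x, y, n), reward, (x, y) == goal

for episode in range(2000):
    state, done = (0, 0, int(rng.integers(NOISE))), False
    while not done:
        if rng.random() < eps:
            action = int(rng.integers(ACTIONS))
        else:
            action = int(np.argmax(Q[state]))
        nxt, reward, done = step(state, action)
        # Standard tabular Q-learning update.
        target = reward + gamma * np.max(Q[nxt]) * (not done)
        Q[state][action] += alpha * (target - Q[state][action])
        state = nxt

# Heuristic relevance score: how much the greedy state value varies along each
# dimension, averaged over the remaining dimensions. A near-zero score suggests
# the dimension could be abstracted away without affecting the learned policy.
V = Q.max(axis=-1)                        # greedy state values, shape (X, Y, NOISE)
for dim, name in enumerate(["x", "y", "noise"]):
    score = V.std(axis=dim).mean()
    print(f"dimension {name}: relevance score {score:.4f}")
```

On this toy task, the "noise" dimension typically receives a much lower score than x and y, which is the kind of signal a state-abstraction scheme can use to drop it; the paper's actual criterion is statistical and tied to Q-value behavior during learning rather than this post-hoc variance check.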
Keywords
learning (artificial intelligence); multi-agent systems; statistical analysis; Bellman curse of dimensionality; Q-learning; canonical single-agent path planning task; linear dynamic learning algorithm; multiagent foraging problem; reinforcement learning; resource-constrained learning problem; state abstraction; state dimensionality; statistical parameter; Aerospace electronics; Convergence; Feature extraction; Indexes; Learning (artificial intelligence); Noise; Vectors; complexity reduction; intelligent agent; reinforcement learning; state abstraction
fLanguage
English
Publisher
ieee
Conference_Titel
2014 13th International Conference on Machine Learning and Applications (ICMLA)
Conference_Location
Detroit, MI
Type
conf
DOI
10.1109/ICMLA.2014.22
Filename
7033099
Link To Document