DocumentCode :
1877365
Title :
Reinforcement learning generalization using state aggregation with a maze-solving problem
Author :
Gunady, Mohamed K. ; Gomaa, Walid
Author_Institution :
Dept. of Comput. Sci. Eng., Egypt-Japan Univ. of Sci. & Technol., Alexandria, Egypt
fYear :
2012
fDate :
6-9 March 2012
Firstpage :
157
Lastpage :
162
Abstract :
Reinforcement learning (RL) classically relies on a lookup table that stores the value function over state-action pairs. Consequently, in environments with a large-scale state-action space, RL fails to achieve practical convergence rates. Generalizing the original state-action space into a more compact representation is therefore crucial for many practical applications. In this paper, we propose a generalization technique using 'state aggregation'. We apply this technique to Q-learning and show how to aggregate similar states together. The modified RL system architecture is presented along with the new algorithm. The proposed approach is tested and analyzed on a maze problem.
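With state aggregation, the standard Q-learning update Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)] is applied to an abstract state phi(s) rather than the raw state s, so the table shrinks from |S| x |A| to |phi(S)| x |A| entries. The following is a minimal Python sketch of this idea on a toy grid maze; the coarse 2x2-block aggregation map, grid size, and reward scheme are illustrative assumptions, not the paper's actual aggregation criterion.

    # Minimal sketch: tabular Q-learning over aggregated states on a toy maze.
    # The 2x2-block aggregation below is a hypothetical stand-in for the
    # paper's similarity-based aggregation of states.
    import random

    ROWS, COLS = 6, 6
    GOAL = (5, 5)
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def aggregate(state):
        """Map a raw maze cell to an abstract state (its 2x2 block id)."""
        r, c = state
        return (r // 2, c // 2)

    def step(state, action):
        """Move within grid bounds; small step cost, reward at the goal."""
        r, c = state
        dr, dc = action
        nxt = (max(0, min(ROWS - 1, r + dr)), max(0, min(COLS - 1, c + dc)))
        reward = 1.0 if nxt == GOAL else -0.01
        return nxt, reward, nxt == GOAL

    def q_learning(episodes=2000, alpha=0.1, gamma=0.95, epsilon=0.1):
        # Q-table indexed by aggregated states: (ROWS//2)*(COLS//2) abstract
        # states instead of ROWS*COLS raw states.
        Q = {}
        for _ in range(episodes):
            state = (0, 0)
            for _ in range(500):  # cap episode length
                s = aggregate(state)
                if random.random() < epsilon:
                    a = random.randrange(len(ACTIONS))
                else:
                    a = max(range(len(ACTIONS)),
                            key=lambda i: Q.get((s, i), 0.0))
                nxt, reward, done = step(state, ACTIONS[a])
                s2 = aggregate(nxt)
                best_next = max(Q.get((s2, i), 0.0)
                                for i in range(len(ACTIONS)))
                q = Q.get((s, a), 0.0)
                Q[(s, a)] = q + alpha * (reward + gamma * best_next - q)
                state = nxt
                if done:
                    break
        return Q

    if __name__ == "__main__":
        Q = q_learning()
        print(len({s for (s, _) in Q}), "abstract states learned")

Because all cells in a block share one table entry, updates generalize across similar states at the cost of a coarser policy; this convergence-versus-accuracy trade-off is what the abstract describes testing on the maze problem.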
Keywords :
learning (artificial intelligence); table lookup; generalization technique; large-scale state-action space; lookup table; maze-solving problem; Q-learning; reinforcement learning generalization; state aggregation; state-action pairs; value function; Decision support systems; Handheld computers; Q-learning; generalization; reinforcement learning; state aggregation;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
2012 Japan-Egypt Conference on Electronics, Communications and Computers (JEC-ECC)
Conference_Location :
Alexandria
Print_ISBN :
978-1-4673-0485-6
Type :
conf
DOI :
10.1109/JEC-ECC.2012.6186975
Filename :
6186975