DocumentCode :
729364
Title :
Memory efficient factored abstraction for reinforcement learning
Author :
Sahin, Coskun ; Cilden, Erkin ; Polat, Faruk
Author_Institution :
Dept. of Comput. Eng., Middle East Tech. Univ., Ankara, Turkey
fYear :
2015
fDate :
24-26 June 2015
Firstpage :
18
Lastpage :
23
Abstract :
Classical reinforcement learning techniques are often inadequate for problems with large state spaces due to the curse of dimensionality. If the states can be represented as a set of variables, the environment can be modeled more compactly. Automatic detection and use of temporal abstractions during learning has been shown to be effective in increasing learning speed. In this paper, we propose a factored automatic temporal abstraction method based on an existing temporal abstraction strategy, namely the extended sequence tree algorithm, which handles state differences via state variable changes. The proposed method has been shown to provide significant memory gains on selected benchmark problems.
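Illustration (not part of the original record): the factored representation mentioned in the abstract treats a state as a set of named variables rather than an opaque identifier, so that only the variables that change between successive states need to be tracked. The minimal Python sketch below shows this idea under assumed names (State, changed_variables, and the example variables x, y, has_key are all hypothetical); it is not the authors' extended sequence tree algorithm.

    # Minimal sketch of a factored state and of detecting which variables
    # change between two successive states (illustrative only).
    from typing import Dict, Tuple

    State = Dict[str, int]  # hypothetical factored state: variable name -> value

    def changed_variables(prev: State, curr: State) -> Tuple[str, ...]:
        """Return the names of state variables whose values differ."""
        return tuple(v for v in curr if curr[v] != prev.get(v))

    # Example: after a move east in a grid-like task, only 'x' changes.
    s0 = {"x": 1, "y": 4, "has_key": 0}
    s1 = {"x": 2, "y": 4, "has_key": 0}
    print(changed_variables(s0, s1))  # ('x',)

Recording such variable-level differences, rather than whole states, is what allows an abstraction structure to stay compact as the state space grows.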
Keywords :
learning (artificial intelligence); trees (mathematics); automatic detection; classical reinforcement learning technique; extended sequence tree algorithm; factored automatic temporal abstraction; temporal abstraction strategy; learning speed; memory efficient factored abstraction; memory gain; state difference; Benchmark testing; Decision trees; History; Learning (artificial intelligence); Public transportation; Robot kinematics; extended sequence tree; factored MDP; learning abstractions; reinforcement learning;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2015 IEEE 2nd International Conference on Cybernetics (CYBCONF)
Conference_Location :
Gdynia
Print_ISBN :
978-1-4799-8320-9
Type :
conf
DOI :
10.1109/CYBConf.2015.7175900
Filename :
7175900