DocumentCode :
2049334
Title :
Continuous valued Q-learning method able to incrementally refine state space
Author :
Takeda, M. ; Nakamura, T. ; Ogasawara, T.
Author_Institution :
Wako Res. Center, Honda R&D Co. Ltd, Saitama, Japan
Volume :
1
fYear :
2001
fDate :
2001
Firstpage :
265
Abstract :
Conventional reinforcement learning methods are difficult to apply to real robot tasks, because they must represent values over infinitely many state-action pairs. To represent an action value function continuously, a function approximation method is usually applied. In our previous work (2000), we pointed out that this type of learning method potentially suffers from a discontinuity problem in the optimal action for a given state. In this paper, we propose a method for estimating where a discontinuity of the optimal action takes place and for refining the state space incrementally. We call this method a continuous-valued Q-learning method. To show the validity of our method, we apply it to a simulated robot.
Keywords :
learning (artificial intelligence); optimisation; robots; state estimation; state-space methods; action value function; continuous valued Q-learning; discontinuity; incremental refinement; robots; state estimation; state space; Information science; Learning systems; Orbital robotics; Quantization; Research and development; Robots; Space technology; State estimation; State-space methods; Statistical analysis;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Intelligent Robots and Systems, 2001. Proceedings. 2001 IEEE/RSJ International Conference on
Conference_Location :
Maui, HI
Print_ISBN :
0-7803-6612-3
Type :
conf
DOI :
10.1109/IROS.2001.973369
Filename :
973369