Title :
Natural gradient actor-critic algorithms using random rectangular coarse coding
Author_Institution :
Dept. of Marine Eng., Kyushu Univ., Fukuoka
Abstract :
Natural gradient actor-critic algorithms achieve outstanding learning performance compared with conventional actor-critic algorithms, especially in high-dimensional spaces. However, the representation of stochastic policies and value functions remains an open issue, because actor-critic approaches require it to be designed carefully. The author has previously proposed random rectangular coarse coding, which is very simple and well suited to approximating Q-values in high-dimensional state-action spaces. This paper presents a quantitative analysis of the random coarse coding in comparison with regular-grid approaches, and proposes a new approach that combines the natural gradient actor-critic with the random rectangular coarse coding.
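Note: the abstract does not detail the coding scheme, but the following minimal Python sketch illustrates the general idea of random rectangular coarse coding as described above: randomly placed and sized axis-aligned rectangles over the state-action space act as binary features, and Q-values are approximated linearly in those features. All names, ranges, and the number of features are illustrative assumptions, not taken from the paper.

    # Illustrative sketch only; parameters and ranges are assumed, not from the paper.
    import numpy as np

    class RandomRectangularCoarseCoding:
        def __init__(self, lows, highs, n_features=200, seed=0):
            rng = np.random.default_rng(seed)
            lows, highs = np.asarray(lows, float), np.asarray(highs, float)
            span = highs - lows
            # Random rectangle centers anywhere in the box, with random per-dimension widths.
            self.centers = lows + rng.uniform(size=(n_features, lows.size)) * span
            self.half_widths = rng.uniform(0.1, 0.5, size=(n_features, lows.size)) * span

        def features(self, x):
            # Binary feature k is active if the point lies inside the k-th rectangle.
            x = np.asarray(x, float)
            inside = np.abs(x - self.centers) <= self.half_widths
            return inside.all(axis=1).astype(float)

    # Linear Q-value approximation over the coarse-coded state-action features.
    coding = RandomRectangularCoarseCoding(lows=[-1, -1, -1], highs=[1, 1, 1])
    theta = np.zeros(200)                       # learned weight vector
    q_value = theta @ coding.features([0.2, -0.5, 0.7])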
Keywords :
encoding; function approximation; gradient methods; learning (artificial intelligence); sampling methods; Gibbs sampling; Q-learning method; high-dimensional state-action space; natural gradient actor-critic algorithm; random rectangular coarse coding; reinforcement learning; Automatic control; Costs; Function approximation; Gradient methods; Learning; Orbital robotics; Robot control; Robotics and automation; Sampling methods; Stochastic processes; Q-learning; Reinforcement learning; actor-critic; continuous state-action spaces;
Conference_Title :
SICE Annual Conference, 2008
Conference_Location :
Tokyo
Print_ISBN :
978-4-907764-30-2
Electronic_ISBN :
978-4-907764-29-6
DOI :
10.1109/SICE.2008.4654995