Title :
Hierarchic function approximation in kd-Q-learning
Author :
Vollbrecht, Hans
Author_Institution :
Dept. of Neural Inf. Process., Ulm Univ., Germany
Abstract :
Function approximation is an important issue in reinforcement learning for control problems with continuous state spaces. A new learning algorithm is presented that approximates the quality function with a hierarchic discretization structure called a kd-tree. Initially, it learns each experienced state transition simultaneously on several hierarchic levels representing different spatial generalizations. As learning proceeds, state transitions are increasingly refined by a descent in the kd-tree that scales down both the spatial and the temporal generalization, the latter being the natural abstraction in action space. By increasing the representational complexity within the agent, the learning effort can be reduced considerably.
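The multi-level update described in the abstract can be sketched as follows. This is a minimal illustration of Q-value bookkeeping over a kd-tree, not the paper's implementation: every cell on the root-to-leaf path containing a state holds a Q-estimate at its own spatial resolution, and an update touches all of them at once. All names (`KDNode`, `q_update`, `q_value`) and the split policy (halving dimensions cyclically) are hypothetical.

```python
class KDNode:
    """Node of a kd-tree partitioning a continuous state space.

    Each node is a hyperrectangular cell and stores one Q-estimate per
    action at its own level of spatial generalization.
    """
    def __init__(self, bounds, depth=0):
        self.bounds = bounds        # list of (lo, hi) intervals, one per dimension
        self.depth = depth
        self.q = {}                 # action -> Q estimate at this resolution
        self.left = self.right = None

    def split(self):
        """Refine this cell by halving it along the dimension cycled by depth
        (hypothetical split policy for illustration)."""
        d = self.depth % len(self.bounds)
        lo, hi = self.bounds[d]
        mid = (lo + hi) / 2.0
        lb = list(self.bounds); lb[d] = (lo, mid)
        rb = list(self.bounds); rb[d] = (mid, hi)
        self.left = KDNode(lb, self.depth + 1)
        self.right = KDNode(rb, self.depth + 1)

    def child_for(self, state):
        d = self.depth % len(self.bounds)
        lo, hi = self.bounds[d]
        return self.left if state[d] < (lo + hi) / 2.0 else self.right


def path_to_leaf(root, state):
    """All cells, coarse to fine, that contain `state`."""
    path = [root]
    node = root
    while node.left is not None:
        node = node.child_for(state)
        path.append(node)
    return path


def q_update(root, state, action, target, alpha=0.5):
    """Update Q(s, a) simultaneously on every hierarchic level of the
    root-to-leaf path, i.e. at several spatial generalizations at once."""
    for node in path_to_leaf(root, state):
        old = node.q.get(action, 0.0)
        node.q[action] = old + alpha * (target - old)


def q_value(root, state, action):
    """Read Q from the most refined cell that already has an estimate,
    falling back to coarser generalizations otherwise."""
    for node in reversed(path_to_leaf(root, state)):
        if action in node.q:
            return node.q[action]
    return 0.0
```

A coarse cell's estimate serves every state it contains until descent refines that region, which is how early learning generalizes broadly while later learning localizes.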
Keywords :
function approximation; generalisation (artificial intelligence); learning (artificial intelligence); trees (mathematics); agent; continuous state space; hierarchic discretization structure; hierarchic function approximation; kd-Q-learning; kd-tree; quality function; reinforcement learning; spatial generalization; state transition; temporal generalization; Approximation error; Function approximation; Information processing; Intelligent systems; Learning; Optimal control; Partitioning algorithms; Spatial resolution; State estimation; State-space methods;
Conference_Titel :
Proceedings of the Fourth International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies, 2000
Conference_Location :
Brighton
Print_ISBN :
0-7803-6400-7
DOI :
10.1109/KES.2000.884090