Title :
Influence Graph based Task Decomposition and State Abstraction in Reinforcement Learning
Author :
Yu, Lasheng ; Hong, Fei ; Wang, PengRen ; Xu, Yang ; Liu, Yong
Author_Institution :
Sch. of Inf. Sci. & Eng., Central South Univ., Changsha
Abstract :
Task decomposition and state abstraction are crucial components of reinforcement learning. They allow an agent to ignore aspects of its current state that are irrelevant to the decision at hand, thereby speeding up dynamic programming and learning. This paper presents the State Variable Influence (SVI) algorithm, which uses a dynamic Bayesian network model to construct an influence graph that captures the relationships between state variables. SVI performs state abstraction for each subtask by ignoring irrelevant state variables and lower-level subtasks. Experimental results show that the task decomposition introduced by SVI can significantly accelerate the construction of a near-optimal policy. This general framework can be applied to a broad spectrum of complex real-world problems such as robotics, industrial manufacturing, and games.
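Illustration (not part of the original record) :
The abstract describes the core idea only at a high level, so the following is a minimal Python sketch, not the paper's implementation, of how an influence graph derived from a DBN's dependency structure can be used to drop state variables that cannot affect a subtask. The function names, the edge list, and the taxi-style example below are hypothetical and chosen purely for illustration.

from collections import defaultdict

def build_influence_graph(dbn_edges):
    # dbn_edges: iterable of (parent, child) pairs taken from the DBN structure,
    # meaning `parent` at time t influences `child` at time t+1.
    graph = defaultdict(set)
    for parent, child in dbn_edges:
        graph[parent].add(child)
    return graph

def relevant_variables(graph, target_vars):
    # Return every state variable that can reach a target (reward-relevant)
    # variable along influence edges; all other variables can be ignored.
    reverse = defaultdict(set)
    for parent, children in graph.items():
        for child in children:
            reverse[child].add(parent)
    relevant, frontier = set(target_vars), list(target_vars)
    while frontier:
        var = frontier.pop()
        for ancestor in reverse[var]:
            if ancestor not in relevant:
                relevant.add(ancestor)
                frontier.append(ancestor)
    return relevant

def abstract_state(state, keep):
    # Project a full state (dict of variable -> value) onto the kept variables.
    return {var: value for var, value in state.items() if var in keep}

if __name__ == "__main__":
    # Hypothetical example: passenger location influences the pickup subtask,
    # while the fuel level does not, so it is abstracted away for that subtask.
    edges = [("taxi_pos", "passenger_in_taxi"),
             ("passenger_loc", "passenger_in_taxi"),
             ("fuel", "fuel")]
    graph = build_influence_graph(edges)
    keep = relevant_variables(graph, ["passenger_in_taxi"])
    full_state = {"taxi_pos": (2, 3), "passenger_loc": (0, 4),
                  "fuel": 7, "passenger_in_taxi": False}
    print(abstract_state(full_state, keep))  # "fuel" is dropped for this subtask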
Keywords :
Bayes methods; dynamic programming; graph theory; learning (artificial intelligence); dynamic Bayesian network model; dynamic programming; influence graph; reinforcement learning; state abstraction; state variable influence algorithm; task decomposition; Acceleration; Bayesian methods; Dynamic programming; Function approximation; Information science; Learning; Manufacturing industries; Mice; Service robots; Stochastic processes; SVI algorithm; dynamic Bayesian network; influence graph; reinforcement learning; task decomposition;
Conference_Titel :
The 9th International Conference for Young Computer Scientists (ICYCS 2008)
Conference_Location :
Hunan
Print_ISBN :
978-0-7695-3398-8
Electronic_ISBN :
978-0-7695-3398-8
DOI :
10.1109/ICYCS.2008.34