DocumentCode :
3636671
Title :
Using prior knowledge to accelerate online least-squares policy iteration
Author :
Lucian Buşoniu;Bart De Schutter;Robert Babuška;Damien Ernst
Author_Institution :
Delft University of Technology, The Netherlands
Volume :
1
Year :
2010
Firstpage :
1
Lastpage :
6
Abstract :
Reinforcement learning (RL) is a promising paradigm for learning optimal control. Although RL is generally envisioned as working without any prior knowledge about the system, such knowledge is often available and can be exploited to great advantage. In this paper, we consider prior knowledge about the monotonicity of the control policy with respect to the system states, and we introduce an approach that exploits this type of prior knowledge to accelerate a state-of-the-art RL algorithm called online least-squares policy iteration (LSPI). Monotonic policies are appropriate for important classes of systems appearing in control applications. LSPI is a data-efficient RL algorithm that we previously extended to online learning, but which, until now, did not offer a way to use prior knowledge about the policy. In an empirical evaluation, online LSPI with prior knowledge learns much faster and more reliably than the original online LSPI.
Keywords :
"Acceleration","Control systems","Optimal control","Automatic control","Linear systems","Learning","Nonlinear control systems","Control nonlinearities","Nonlinear systems","Quadratic programming"
Publisher :
ieee
Conference_Title :
2010 IEEE International Conference on Automation Quality and Testing Robotics (AQTR)
Print_ISBN :
978-1-4244-6724-2
Type :
conf
DOI :
10.1109/AQTR.2010.5520917
Filename :
5520917