DocumentCode :
115714
Title :
Lipschitz robust control from off-policy trajectories
Author :
Fonteneau, Raphael ; Ernst, Damien ; Boigelot, Bernard ; Louveaux, Quentin
Author_Institution :
Dept. of Electr. Eng. & Comput. Sci., Univ. of Liege, Liege, Belgium
fYear :
2014
fDate :
15-17 Dec. 2014
Firstpage :
4924
Lastpage :
4931
Abstract :
We study the min-max optimization problem introduced in [Fonteneau et al. (2011), “Towards Min Max Generalization in Reinforcement Learning”, Springer CCIS, vol. 129, pp. 61-77] for computing control policies for batch-mode reinforcement learning in a deterministic setting with a fixed, finite optimization horizon. First, we show that the min part of this problem is NP-hard. We then provide two relaxation schemes. The first relaxation scheme drops some constraints so as to obtain a problem that is solvable in polynomial time. The second relaxation scheme, based on a Lagrangian relaxation in which all constraints are dualized, can also be solved in polynomial time. We show theoretically that both relaxation schemes yield better (tighter) bounds than those given in [Fonteneau et al. (2011)].
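As a purely illustrative sketch (not taken from the paper, whose exact notation may differ), the min-max problem and the Lagrangian relaxation of its inner minimization can be written as follows, where $J^{\pi}$ denotes the return of a policy $\pi$ over the finite horizon, $\mathcal{C}$ stands for the set of Lipschitz-continuous environments compatible with the batch of off-policy trajectories, and the functions $g_i$ stand for the corresponding compatibility constraints:
\[
\max_{\pi}\; \min_{e \in \mathcal{C}} \; J^{\pi}(e),
\qquad\qquad
\min_{x}\Big\{\, c(x) \;:\; g_i(x) \le 0,\ i = 1,\dots,m \,\Big\}
\;\;\ge\;\;
\min_{x}\; c(x) + \sum_{i=1}^{m} \lambda_i\, g_i(x)
\quad \text{for any fixed } \lambda \ge 0 .
\]
Dropping some of the constraints $g_i(x) \le 0$ (first scheme) or dualizing all of them with multipliers $\lambda_i \ge 0$ (second scheme) can only decrease the optimal value of the inner minimization, so both relaxations still yield valid lower bounds on the worst-case return while being computable in polynomial time.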
Keywords :
computational complexity; learning (artificial intelligence); minimax techniques; robust control; trajectory control; Lagrangian relaxation; Lipschitz robust control; NP-hard problem; batch mode reinforcement learning; control policy; min max optimization problem; off-policy trajectory; polynomial time; relaxation scheme; Dispersion; Learning (artificial intelligence); Optimization; Polynomials; Search problems; Stochastic processes; Trajectory;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2014 IEEE 53rd Annual Conference on Decision and Control (CDC)
Conference_Location :
Los Angeles, CA
Print_ISBN :
978-1-4799-7746-8
Type :
conf
DOI :
10.1109/CDC.2014.7040158
Filename :
7040158