DocumentCode :
3743675
Title :
Online learning of optimal strategies in unknown environments
Author :
Santiago Paternain;Alejandro Ribeiro
Author_Institution :
Department of Electrical and Systems Engineering, University of Pennsylvania, 200 South 33rd Street, Philadelphia, 19104, USA
fYear :
2015
Firstpage :
3951
Lastpage :
3958
Abstract :
Define an environment as a set of convex constraint functions that vary arbitrarily over time, and consider a cost function that is also convex and arbitrarily varying. Agents operating in this environment intend to select actions that are feasible at all times while minimizing the cost's time average. Such an action is said to be optimal and can be computed offline if the cost and the environment are known a priori. An online policy is one that depends causally on the cost and the environment. To compare online policies to the optimal offline action, define the fit of a trajectory as a vector that integrates the constraint violations over time, and its regret as the cost difference with the optimal action accumulated over time. Fit measures the extent to which an online policy succeeds in learning feasible actions, while regret measures its success in learning optimal actions. This paper proposes the use of online policies computed from a saddle point controller. It is shown that this controller produces policies with bounded regret and with fit that grows at a sublinear rate. These properties indicate that the controller finds trajectories that are feasible and optimal in a relaxed sense. Concepts are illustrated throughout with the problem of a shepherd that wants to stay close to all sheep in a herd. Numerical experiments show that the saddle point controller allows the shepherd to do so.
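The abstract's primal-dual idea can be sketched in code: descend on the action to reduce the Lagrangian of the time-varying cost, ascend on a nonnegative multiplier driven by the constraint value, and accumulate regret (cost gap to a fixed comparator) and fit (integrated constraint violation) along the trajectory. This is a minimal scalar toy, not the paper's controller: the cost target `c_t`, sheep position `s_t`, radius `r`, step size `eta`, and comparator `x_star` are all illustrative assumptions.

```python
# Minimal sketch of an online saddle-point (primal-dual) update, assuming
# a scalar toy problem: cost f_t(x) = (x - c_t)^2 and one constraint
# g_t(x) = (x - s_t)^2 - r^2 <= 0 ("stay within radius r of a moving sheep").
# All specific functions and parameters here are assumptions for illustration.
import math


def saddle_point_run(T=2000, eta=0.05, r=1.0):
    x, lam = 0.0, 0.0          # primal action and dual (Lagrange) variable
    regret, fit = 0.0, 0.0
    x_star = 0.5               # fixed offline comparator action
    for t in range(T):
        c_t = 0.5 + 0.1 * math.sin(0.01 * t)   # slowly varying cost target
        s_t = 0.5 + 0.2 * math.cos(0.01 * t)   # slowly varying sheep position
        f = (x - c_t) ** 2                     # instantaneous cost at x
        g = (x - s_t) ** 2 - r ** 2            # constraint value at x
        regret += f - (x_star - c_t) ** 2      # cost gap to the comparator
        fit += max(g, 0.0)                     # accumulated violation
        # Saddle-point dynamics: descend in x on the Lagrangian f + lam * g,
        # ascend in lam on the constraint value.
        grad_x = 2 * (x - c_t) + lam * 2 * (x - s_t)
        x -= eta * grad_x
        lam = max(0.0, lam + eta * g)          # project dual onto lam >= 0
    return regret, fit
```

With these mild drifts the trajectory tracks the target, the constraint stays slack, and both running totals remain small relative to the horizon, which is the qualitative behavior (bounded regret, slowly growing fit) the abstract describes.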
Keywords :
"Trajectory","Linear programming","Cost function","Time factors","Heuristic algorithms","Minimization","Stacking"
Publisher :
ieee
Conference_Titel :
2015 IEEE 54th Annual Conference on Decision and Control (CDC)
Type :
conf
DOI :
10.1109/CDC.2015.7402833
Filename :
7402833