DocumentCode :
3014066
Title :
Dynamic optimization and learning for renewal systems
Author :
Neely, Michael J.
Author_Institution :
Electr. Eng. Dept., Univ. of Southern California, Los Angeles, CA, USA
fYear :
2010
fDate :
7-10 Nov. 2010
Firstpage :
681
Lastpage :
688
Abstract :
We consider the problem of optimizing time averages in systems with independent and identically distributed behavior over renewal frames. This includes scheduling and task processing to maximize utility in stochastic networks with variable-length scheduling modes. On every frame, a new policy is implemented that affects the frame size and creates a vector of attributes. An algorithm is developed for choosing policies on each frame to maximize a concave function of the time-average attribute vector, subject to additional time-average constraints. The algorithm is based on Lyapunov optimization concepts and involves minimizing a “drift-plus-penalty” ratio over each frame. The algorithm can learn efficient behavior without a priori statistical knowledge by sampling from the past. Our framework is applicable to a large class of problems, including Markov decision problems.
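Illustrative Sketch :
A minimal sketch (not the paper's exact construction) of frame-based drift-plus-penalty-ratio selection as described in the abstract, assuming a small finite policy set, a single time-average constraint of the form avg(y) <= 0, a linear penalty in place of the concave utility objective, and empirical estimates built by sampling from past frames. The policy names, numeric values, and the simulator run_frame are hypothetical.

import random

V = 10.0                     # penalty weight (assumed tuning parameter)
policies = ["A", "B", "C"]   # hypothetical finite policy set

def run_frame(pi):
    # Hypothetical renewal-frame simulator: returns the frame length T,
    # the penalty accrued over the frame, and a list of constraint
    # attributes y_l whose time averages must stay <= 0.
    T = random.uniform(1.0, 3.0)
    penalty = {"A": 2.0, "B": 1.0, "C": 3.0}[pi] * T
    y = [{"A": 0.5, "B": 1.5, "C": -1.0}[pi] * T]
    return T, penalty, y

history = {pi: [run_frame(pi)] for pi in policies}  # one seed sample per policy
Q = [0.0]                                           # one virtual queue per constraint

def mean_stats(pi):
    # Empirical estimates from past frames ("sampling from the past").
    samples = history[pi]
    n = len(samples)
    T = sum(s[0] for s in samples) / n
    p = sum(s[1] for s in samples) / n
    y = [sum(s[2][k] for s in samples) / n for k in range(len(Q))]
    return T, p, y

for frame in range(1000):
    # Drift-plus-penalty ratio: (sum_l Q_l * E[y_l] + V * E[penalty]) / E[T].
    def ratio(pi):
        T, p, y = mean_stats(pi)
        return (sum(q * yl for q, yl in zip(Q, y)) + V * p) / T

    pi = min(policies, key=ratio)       # greedy choice for this frame
    T, p, y = run_frame(pi)             # observe the actual frame outcome
    history[pi].append((T, p, y))
    Q = [max(q + yl, 0.0) for q, yl in zip(Q, y)]  # virtual queue update

In this sketch the virtual queue Q stands in for the Lyapunov drift term; a larger V puts more weight on penalty minimization at the cost of slower convergence of the time-average constraint.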
Keywords :
Lyapunov methods; Markov processes; learning (artificial intelligence); scheduling; telecommunication network management; Lyapunov optimization; Markov decision; drift-plus-penalty ratio; dynamic optimization; frame size; renewal frames; renewal systems; stochastic networks; task processing; time average attribute vector; time averages; variable length scheduling modes; Approximation algorithms; Approximation methods; Convergence; Markov processes; Optimization; Time factors; Wireless communication;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
2010 Conference Record of the Forty Fourth Asilomar Conference on Signals, Systems and Computers (ASILOMAR)
Conference_Location :
Pacific Grove, CA
ISSN :
1058-6393
Print_ISBN :
978-1-4244-9722-5
Type :
conf
DOI :
10.1109/ACSSC.2010.5757648
Filename :
5757648