DocumentCode :
502804
Title :
Stochastic optimization based on principal-agent problem
Author :
Ren, Xiaoyu ; Shao, Xinping ; Li, Shenghong
Author_Institution :
Dept. of Math., Zhejiang Univ., Hangzhou, China
Volume :
2
fYear :
2009
fDate :
8-9 Aug. 2009
Firstpage :
176
Lastpage :
179
Abstract :
Using the theory of stochastic dynamic programming, we provide methods for deriving optimal rules. In this paper, we construct two models of the dynamic state process to maximize the expected utility of the agent, and thereby obtain the well-known Hamilton-Jacobi-Bellman equation. Furthermore, we derive explicit and closed-form solutions of the optimality equations for given utility functions.
Keywords :
dynamic programming; optimal control; stochastic programming; utility theory; Hamilton-Jacobi-Bellman equation; closed-form solution; dynamic state process; expected utility maximization; explicit form solution; optimal rule; principal-agent problem; stochastic dynamic programming; stochastic optimization; Communication system control; Differential equations; Dynamic programming; Mathematics; Nonlinear equations; Optimal control; Partial differential equations; Portfolios; Stochastic processes; Utility theory; HJB equation; principal-agent problem; stochastic differential equation; stochastic dynamic programming; stochastic optimal control;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
ISECS International Colloquium on Computing, Communication, Control, and Management (CCCM 2009)
Conference_Location :
Sanya
Print_ISBN :
978-1-4244-4247-8
Type :
conf
DOI :
10.1109/CCCM.2009.5267952
Filename :
5267952