Author_Institution :
Massachusetts Institute of Technology, Cambridge, MA, USA
Abstract :
General solutions to the optimal stochastic control problem, or the combined estimation and control problem, are extremely difficult to compute since dynamic programming is required. However, if the system is linear, the measurements are linear, and the cost is quadratic, then the optimal stochastic controller separates into 1) a filter that generates the conditional mean of the state, and 2) the optimum (linear) controller that results when all uncertainties are neglected. By altering the system configuration, a new separation theorem is derived for arbitrary nonlinear measurements, discrete-time linear systems, and a quadratic cost. If a feedback loop is placed around the nonlinear measurement device (e.g., an analog-to-digital converter), then the stochastic control can be found without dynamic programming and is computed by cascading a nonlinear filter with the optimum (linear) controller. The primary advantage is the significant saving in computation. The performance of this new system configuration relative to the system without feedback depends on the nonlinearity, and it is not necessarily superior. A numerical example is presented.
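To illustrate the separated structure described above, the following is a minimal Python sketch of a scalar discrete-time LQ loop whose measurement passes through a quantizer (standing in for an analog-to-digital converter), with a feedback loop placed around the quantizer so that only the prediction residual is quantized. The plant parameters, cost weights, quantizer step, and the fixed filter gain are illustrative assumptions, not values from the paper; in particular, the paper's nonlinear filter computes the conditional mean exactly, whereas this sketch uses an ad hoc fixed-gain update purely to show how an estimator cascades with the certainty-equivalent LQ gains.

import numpy as np

rng = np.random.default_rng(0)

# Scalar discrete-time linear plant: x[k+1] = a*x[k] + b*u[k] + w[k]
a, b = 1.0, 1.0
q_w = 0.1          # process-noise variance (assumed)
r_v = 0.01         # measurement-noise variance (assumed)
Q, R = 1.0, 0.1    # quadratic cost weights (assumed)
N = 50             # horizon

# Deterministic LQ part of the separation: backward Riccati recursion
# for the time-varying feedback gains L[k], ignoring all uncertainties.
S = Q
L = np.zeros(N)
for k in reversed(range(N)):
    L[k] = (b * a * S) / (R + b * b * S)
    S = Q + a * a * S - (a * b * S) ** 2 / (R + b * b * S)

# Nonlinear measurement device: a coarse uniform quantizer.
def quantize(z, step=0.5):
    return step * np.round(z / step)

# Simulation: feedback around the quantizer, then filter, then LQ gain.
# The quantizer acts on the residual y - predicted measurement, so the
# signal entering the nonlinearity stays small; the filter adds the
# prediction back and smooths with a fixed gain (a stand-in for the
# conditional-mean nonlinear filter).
x, xhat = 1.0, 0.0
kf_gain = 0.6          # fixed filter gain, chosen ad hoc for this sketch
cost = 0.0
for k in range(N):
    u = -L[k] * xhat                        # controller sees only the estimate
    cost += Q * x**2 + R * u**2
    x = a * x + b * u + rng.normal(0.0, np.sqrt(q_w))
    y = x + rng.normal(0.0, np.sqrt(r_v))   # noisy measurement of the state
    xpred = a * xhat + b * u                # one-step prediction
    residual_q = quantize(y - xpred)        # feedback loop around the quantizer
    xhat = xpred + kf_gain * residual_q     # update estimate from quantized residual

print(f"average stage cost over {N} steps: {cost / N:.3f}")

Because the controller uses only the estimate xhat and the gains L[k] come from the noise-free LQ problem, the sketch exhibits the cascade of filter and certainty-equivalent controller that the abstract describes; whether quantizing the residual (with feedback) beats quantizing the raw measurement depends on the nonlinearity, as the abstract notes.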
Keywords :
Linear systems, time-varying discrete-time; Measurement; Optimal stochastic control; Stochastic optimal control; Control systems; Cost function; Dynamic programming; Feedback loop; Linear feedback control systems; Linear systems; Nonlinear filters; Optimal control; Stochastic processes; Stochastic systems