Title :
Exploitation of an opponent's imperfect information in a stochastic game with autonomous vehicle application
Author :
McEneaney, William M. ; Singh, Rajdeep
Author_Institution :
Dept. of Mechanical & Aerospace Engineering, University of California, San Diego, La Jolla, CA, USA
Abstract :
We consider a finite-state-space, discrete stochastic game problem in which only one player has perfect information. In the notation employed here, only the "red" player has perfect state information; the "blue" player has access only to observation-based information. The observations may be influenced, to some degree, by the controls of both players. A Markov chain model is used in which the transition probabilities depend on the controls of the players. The game is zero-sum. It is known that application by the blue player of the optimal state-feedback control at a maximum likelihood estimate of the state is not optimal; under a saddle-point condition, a form of certainty equivalence does exist for the blue player, but its structure is more complex than that simple approach. In this work, the point of view of the red player is considered. Simulation is used to demonstrate that the optimal state-feedback control for red is not the optimal control, even though red has perfect information. This is a significantly stronger statement than the fact that certainty equivalence fails when the red player has imperfect information. A theory for the development of red controls is presented. Against the simpler blue approach above, this theory yields "deceptive" controls, which provide superior performance in this case. An open question is whether, and under what conditions, this approach yields superior performance for red as compared with state feedback when blue is allowed strategies including the more complex one above. Experimentation and theory are employed to answer this question.
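The setting described above can be illustrated with a toy simulation. The sketch below is not the paper's model: the state space size, transition tilting, observation noise, and both players' policies are hypothetical placeholders. It shows the structural ingredients the abstract names: a controlled Markov chain whose transition probabilities depend on both players' controls, a blue player who filters noisy observations and applies state feedback at the maximum likelihood estimate, and a red player who acts on the true state.

```python
import numpy as np

# Toy illustration (not the paper's model). All numbers and policies
# below are hypothetical, chosen only to exercise the structure.

rng = np.random.default_rng(0)

N_STATES = 3          # finite state space
CONTROLS = (0, 1)     # each player draws from a small control set

def transition_matrix(u_blue, u_red):
    """Row-stochastic P[x, x'] depending on both players' controls."""
    base = np.full((N_STATES, N_STATES), 1.0 / N_STATES)
    tilt = 0.3 * (u_blue - u_red)                    # controls tilt the dynamics
    P = base + tilt * (np.eye(N_STATES) - 1.0 / N_STATES)
    P = np.clip(P, 0.05, None)
    return P / P.sum(axis=1, keepdims=True)

def observation_likelihood(y, x):
    """Blue sees the true state with prob 0.8, else a uniform error."""
    return 0.8 if y == x else 0.2 / (N_STATES - 1)

def bayes_update(belief, y):
    """Blue's information state: a Bayes filter over the finite state space."""
    post = belief * np.array([observation_likelihood(y, x)
                              for x in range(N_STATES)])
    return post / post.sum()

def blue_ml_feedback(belief, policy):
    """The simpler blue approach: state feedback applied at the ML estimate."""
    x_ml = int(np.argmax(belief))
    return policy[x_ml]

# Hypothetical state-feedback policies, for illustration only.
blue_policy = {0: 0, 1: 1, 2: 1}
red_policy = {0: 1, 1: 0, 2: 0}      # red acts on the true state

x = 0
belief = np.full(N_STATES, 1.0 / N_STATES)
for _ in range(10):
    y = x if rng.random() < 0.8 else int(rng.integers(N_STATES))
    belief = bayes_update(belief, y)
    u_b = blue_ml_feedback(belief, blue_policy)
    u_r = red_policy[x]              # perfect information for red
    P = transition_matrix(u_b, u_r)
    x = int(rng.choice(N_STATES, p=P[x]))
```

Because red observes the true state while blue acts only through its belief, red can in principle choose controls that degrade blue's observations or exploit blue's ML-based feedback, which is the "deceptive control" idea the abstract investigates.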
Keywords :
Markov processes; command and control systems; optimal control; probability; state feedback; stochastic games; vehicles; Markov chain model; autonomous vehicle; certainty equivalence; deceptive controls; discrete stochastic game problem; finite state space; maximum likelihood estimate; observation-based information; opponent imperfect information; optimal control; optimal state feedback control; perfect state information; saddle point condition; simulation; transition probabilities; Aerodynamics; Automatic control; Maximum likelihood estimation; Mobile robots; Optimal control; Remotely operated vehicles; State estimation; State feedback; Stochastic processes; Vehicle dynamics;
Conference_Titel :
43rd IEEE Conference on Decision and Control (CDC), 2004
Print_ISBN :
0-7803-8682-5
DOI :
10.1109/CDC.2004.1429560