We consider the control of a dynamic system modeled as a Markov chain. The transition probability matrix of the Markov chain depends on the control and also on an unknown parameter α₀. The unknown parameter belongs to a given finite set. The long-run average cost depends on the control policy and the unknown parameter. Thus, a direct approach to the optimization of performance is not feasible. A common procedure calls for an on-line estimation of the unknown parameter and the minimization of the cost functional using the estimate in lieu of the true parameter. It is well known that this "certainty equivalence" (CE) solution may fail to achieve optimal performance, even asymptotically. In this presentation of a new optimization-oriented approach to adaptive control, we consider a composite functional which simultaneously takes care of the estimation and control needs. The global minimum of this composite functional coincides with the minimum of the original cost functional. Thus, its joint minimization with respect to control and parameter estimates would yield the optimal control policy. This joint minimization is not feasible, but it suggests an algorithm that asymptotically achieves the desired goal. The transient behavior of the algorithm, as well as the situation when
the true parameter α₀ does not belong to the given set, are also investigated.
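
To make the certainty-equivalence procedure mentioned above concrete, the following minimal Python sketch simulates a controlled two-state Markov chain whose transition matrix depends on the control and on a parameter drawn from a finite set, estimates the parameter on-line by maximum likelihood over that set, and applies the policy that is optimal for the current estimate. All state spaces, transition matrices, costs, and names are illustrative assumptions, not taken from this work, and the sketch shows only the CE baseline, not the composite-functional algorithm proposed here.

# Illustrative sketch (assumed model, not the paper's algorithm):
# certainty-equivalence adaptive control of a finite Markov chain
# with an unknown parameter taken from a finite set.
import numpy as np

rng = np.random.default_rng(0)

STATES = [0, 1]      # assumed two-state chain
CONTROLS = [0, 1]    # assumed two admissible controls
ALPHAS = [0, 1]      # indices into an assumed finite parameter set

# Assumed transition matrices P[alpha][u][x, x'] and per-stage cost COST[x, u].
P = {
    0: {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
        1: np.array([[0.5, 0.5], [0.6, 0.4]])},
    1: {0: np.array([[0.3, 0.7], [0.8, 0.2]]),
        1: np.array([[0.7, 0.3], [0.1, 0.9]])},
}
COST = np.array([[1.0, 2.0],    # c(x=0, u=0), c(x=0, u=1)
                 [4.0, 0.5]])   # c(x=1, u=0), c(x=1, u=1)

def average_cost(alpha, policy):
    """Long-run average cost of a stationary deterministic policy under model alpha."""
    # Closed-loop transition matrix and per-state cost under this policy.
    Pcl = np.array([P[alpha][policy[x]][x] for x in STATES])
    c = np.array([COST[x, policy[x]] for x in STATES])
    # Stationary distribution of the closed-loop chain (assumed ergodic).
    evals, evecs = np.linalg.eig(Pcl.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
    pi = pi / pi.sum()
    return float(pi @ c)

def best_policy(alpha):
    """Brute-force search over the (tiny) policy space for model alpha."""
    policies = [(u0, u1) for u0 in CONTROLS for u1 in CONTROLS]
    return min(policies, key=lambda pol: average_cost(alpha, pol))

# Certainty-equivalence loop: estimate alpha by maximum likelihood over the
# finite set, then act as if the current estimate were the true parameter.
true_alpha = 1                                  # unknown to the controller
loglik = {a: 0.0 for a in ALPHAS}
x = 0
for t in range(2000):
    alpha_hat = max(loglik, key=loglik.get)     # current ML estimate
    u = best_policy(alpha_hat)[x]               # CE control for the current state
    x_next = rng.choice(STATES, p=P[true_alpha][u][x])
    for a in ALPHAS:                            # update log-likelihood of each candidate
        loglik[a] += np.log(P[a][u][x, x_next])
    x = x_next

print("final estimate:", max(loglik, key=loglik.get), "true:", true_alpha)

In this toy model the candidate transition matrices differ under every control, so the estimate eventually identifies the true parameter. The failure of CE noted above arises when the data generated under the CE policy do not distinguish the candidate parameters, which is the difficulty the composite functional is meant to address.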