An optimal stochastic control problem with a probability criterion is considered. The dynamics are modeled by a stochastic differential equation, and the objective is to maximize the probability that the state trajectory remains in a given bounded region over a given finite time interval. This type of criterion is especially relevant to technical problems where it is essential that certain state variables not exceed given values, and it is closely related to the concept of finite-time stability. For the class of dynamical systems considered, the optimal control is bang-bang. A numerical method developed by Samarskii is used to solve the optimization equation, a nonlinear partial differential equation of parabolic type. A second-order example system is computed to illustrate the numerical results, and time-varying switching curves for the optimal bang-bang solution are plotted.
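The probability criterion described above can be illustrated with a minimal Monte Carlo sketch. The following is not the paper's method (which solves a parabolic PDE); it merely estimates, by Euler-Maruyama simulation, the probability that a scalar controlled diffusion stays inside a bounded region under a simple bang-bang feedback. All dynamics, parameters, and the feedback law are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's PDE method): estimate
#   P( |x(t)| < b  for all t in [0, T] )
# for the hypothetical scalar SDE  dx = u dt + sigma dW,  |u| <= u_max,
# under the bang-bang feedback u = -u_max * sign(x), which drives the
# state toward the origin. All parameter values are assumptions.
rng = np.random.default_rng(0)

T, dt = 1.0, 1e-3                 # horizon and Euler-Maruyama step
b, sigma, u_max = 1.0, 0.5, 1.0   # region half-width, noise, control bound
n_paths = 5000
n_steps = int(T / dt)

x = np.zeros(n_paths)                  # all sample paths start at the origin
alive = np.ones(n_paths, dtype=bool)   # paths that have stayed inside (-b, b)

for _ in range(n_steps):
    u = -u_max * np.sign(x)                        # bang-bang feedback
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)     # Brownian increments
    x = x + u * dt + sigma * dw                    # Euler-Maruyama step
    alive &= np.abs(x) < b                         # flag paths that exited

p_stay = alive.mean()
print(f"Estimated P(stay in region) = {p_stay:.3f}")
```

With the strong restoring control and moderate noise chosen here, most paths remain inside the region; weakening the control bound or enlarging sigma lowers the estimated probability, which is exactly the quantity the optimal control maximizes.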