Let $x(t) = (x_1(t), x_2(t))$ be defined by the stochastic differential equations $dx_i(t) = a_i[x(t)]\,dt + \sum_{j=1}^{2} b_{ij}[x(t)]\,u_j(t)\,dt + \{c_i[x(t)]\}^{1/2}\,dW_i(t)$, where $W_i$ is a standard Brownian motion, for $i = 1, 2$.
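The dynamics can be sketched numerically. The Euler-Maruyama simulation below is a minimal illustration only: the coefficient functions a, b, c, the feedback controls u, and the stopping set are placeholder choices, not the ones studied in the paper.

import numpy as np

# Euler-Maruyama sketch of the controlled diffusion
#   dx_i = a_i[x] dt + sum_j b_ij[x] u_j dt + (c_i[x])^(1/2) dW_i,  i = 1, 2.
# All coefficient functions and controls below are illustrative placeholders.

def a(x):
    return np.array([-x[0], -x[1]])            # illustrative drift a_i[x]

def b(x):
    return np.eye(2)                           # illustrative control matrix b_ij[x]

def c(x):
    return np.array([1.0, 1.0])                # illustrative noise intensities c_i[x]

def u(x):
    return np.array([-0.5 * x[0], 0.5 * x[1]])  # placeholder feedback controls u_1, u_2

def simulate(x0, dt=1e-3, stop=lambda x: np.linalg.norm(x) >= 2.0, max_steps=100_000):
    """Simulate until x(t) first reaches the stopping set (here: ||x|| >= 2)."""
    x = np.array(x0, dtype=float)
    for step in range(max_steps):
        if stop(x):
            return x, step * dt                # exit point and first-passage time
        dW = np.sqrt(dt) * np.random.randn(2)
        x = x + (a(x) + b(x) @ u(x)) * dt + np.sqrt(c(x)) * dW
    return x, max_steps * dt

if __name__ == "__main__":
    x_T, T = simulate([0.5, 0.5])
    print(f"exit point {x_T}, first-passage time {T:.3f}")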
There are two optimizers. The first, which controls $u_1(t)$, seeks to minimize the expected value of a quadratic cost criterion $J$, while the second, which controls $u_2(t)$, seeks to maximize this expected value. The game ends the first time $x(t)$ reaches a given subset of $\mathbb{R}^2$. It is shown that it is sometimes possible to linearize the dynamic programming equation that must be solved to obtain the optimal value of $u_i(t)$. Examples are solved explicitly.
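The abstract does not display the cost criterion or the dynamic programming equation. The LaTeX sketch below records one common setup in which such a linearization is known to be possible; the quadratic running cost, the weights $q_1, q_2$, the constant $\lambda$, and the proportionality condition are illustrative assumptions, not the paper's exact hypotheses.

% Assumed (illustrative) running cost, with T the first-passage time to the stopping set:
%   J = \int_0^{T} \big( \tfrac12 q_1 u_1^2(t) - \tfrac12 q_2 u_2^2(t) + \lambda \big)\, dt,  q_1, q_2 > 0.
% Dynamic programming (Isaacs) equation for the value function F:
\[
  \inf_{u_1}\,\sup_{u_2}\Bigg\{
    \tfrac12 q_1 u_1^2 - \tfrac12 q_2 u_2^2 + \lambda
    + \sum_{i=1}^{2}\Big(a_i[x] + \sum_{j=1}^{2} b_{ij}[x]\,u_j\Big)\frac{\partial F}{\partial x_i}
    + \tfrac12 \sum_{i=1}^{2} c_i[x]\,\frac{\partial^2 F}{\partial x_i^2}
  \Bigg\} = 0,
\]
% with F prescribed on the stopping set. The inner optimization gives
%   u_1^* = -\tfrac{1}{q_1}\sum_i b_{i1} F_{x_i},  u_2^* = \tfrac{1}{q_2}\sum_i b_{i2} F_{x_i},
% and when the b_{ij}, c_i and q_j satisfy a suitable proportionality condition, the
% logarithmic substitution F = -\alpha \ln\Phi removes the quadratic gradient terms,
% leaving a linear PDE for \Phi.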