This paper recommends an alternative to solving the Bellman partial differential equation for the value function in optimal control problems involving stochastic differential or difference equations. It recommends solving instead for the vector Lagrange multiplier associated with a first-order condition for a maximum. The method is preferable to Bellman's in that it exploits this first-order condition and solves only algebraic equations in the control variable, the Lagrange multiplier, and its derivatives, rather than a functional equation. The solution requires no global approximation of the value function and is likely to be more accurate than methods based on global approximations.