We consider dynamic, two-player, zero-sum games where the "minimizing" player seeks to drive an underlying finite-state dynamic system to a special terminal state along a least expected cost path. The "maximizer" seeks to interfere with the minimizer's progress so as to maximize the expected total cost. We consider, for the first time, undiscounted finite-state problems with compact action spaces and transition costs that are not strictly positive. We admit that there are policies for the minimizer which permit the maximizer to prolong the game indefinitely. Under assumptions which generalize deterministic shortest path problems, we establish (i) the existence of a real-valued equilibrium cost vector achievable with stationary policies for the opposing players and (ii) the convergence of value iteration and policy iteration to the unique solution of Bellman's equation.
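
To fix ideas, the display below sketches the form Bellman's equation takes in this setting. The notation (states $1,\dots,n$ plus terminal state $t$, compact action sets $U(i)$ and $V(i)$, stage costs $g(i,u,v)$, and transition probabilities $p_{ij}(u,v)$) is assumed here for illustration and is not quoted from the body of the paper.

% Sketch of Bellman's equation for the zero-sum shortest path game,
% under the assumed notation above. The min/max order shown reflects the
% minimizer's worst-case viewpoint; a saddle-point condition is needed
% for the two orders to coincide.
\[
J^*(i) \;=\; \min_{u \in U(i)} \, \max_{v \in V(i)}
\Bigl[\, g(i,u,v) \;+\; \sum_{j=1}^{n} p_{ij}(u,v)\, J^*(j) \Bigr],
\qquad i = 1,\dots,n,
\]
with the convention $J^*(t) = 0$ at the terminal state. Value iteration then amounts to repeated application of the minimax operator on the right-hand side, $J_{k+1} = T J_k$, and result (ii) asserts the convergence of such iterates to the unique fixed point $J^*$.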