In a real-time task an action must be executed given limited computation. One approach to limited computation is to search a tree of possible action sequences to a fixed depth and then execute an action with the lowest associated backed-up cost. The standard algorithm for such a search is Depth-First Branch-and-Bound (DFBB), also known in the Artificial Intelligence literature as Minimin with Alpha Pruning. This article shows that a depth-bounded extension of a popular iterative algorithm called IDA* has a surprisingly large range of search trees on which it outperforms DFBB, something previous analytical results do not predict. We prove that the extended algorithm, which we call DIDA*, is correct, is guaranteed to terminate, and is asymptotically (i.e., on its last iteration) as efficient as DFBB, assuming a consistent heuristic is used. We also prove that both algorithms are guaranteed not to decrease their accuracy with a deeper search, assuming a consistent heuristic. Because accuracy is generally correlated with decision quality, the time saved by visiting fewer states translates to deeper searches, which in turn translate to better decisions. Results from random search trees show that DIDA* is most efficient when the path cost + leaf node heuristic value is distributed with low variance; as the branching factor increases, the range over which DIDA* is more efficient also increases. Results with Eight, Fifteen, Twenty-four, and Ninety-nine Puzzle implementations of both algorithms, all domains with low variance of path cost + leaf node heuristic value, show that DIDA* significantly outperforms DFBB.
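To make the two algorithms concrete, here is a minimal sketch of both. It is not taken from the article; the function names (`dfbb`, `dida`) and the tree representation are illustrative assumptions. `dfbb` implements minimin with alpha pruning: it backs up the minimum f = g + h over the depth-d frontier, cutting off any subtree whose f already meets the best value found (sound when the heuristic is consistent, since f is then nondecreasing along a path). `dida` is the depth-bounded iterative-deepening variant: it repeatedly probes to depth d with an f-cost threshold, raising the threshold to the smallest f that exceeded it, until a frontier node fits within the bound.

```python
import math

def dfbb(node, depth, g, h, children, cost, alpha=math.inf):
    """DFBB / minimin with alpha pruning (illustrative sketch).
    Returns the minimum backed-up f = g + h over the depth-`depth` frontier,
    pruning any node whose f already meets or exceeds alpha."""
    f = g + h(node)
    if f >= alpha:
        # With a consistent heuristic f never decreases along a path,
        # so this subtree cannot beat the best frontier value found so far.
        return math.inf
    kids = children(node)
    if depth == 0 or not kids:
        return f                      # frontier (or dead-end) node
    best = alpha
    for child in kids:
        v = dfbb(child, depth - 1, g + cost(node, child),
                 h, children, cost, best)
        best = min(best, v)           # tighten alpha for later siblings
    return best

def dida(root, depth, h, children, cost):
    """Depth-bounded iterative deepening (illustrative sketch of DIDA*).
    Raises the f-cost threshold until a depth-`depth` frontier node fits."""
    bound = h(root)
    while True:
        found, next_bound = _probe(root, depth, 0, bound, h, children, cost)
        if found is not None:
            return found              # minimin value on the final iteration
        if next_bound == math.inf:
            return math.inf           # no frontier node reachable at all
        bound = next_bound            # smallest f that exceeded the bound

def _probe(node, depth, g, bound, h, children, cost):
    f = g + h(node)
    if f > bound:
        return None, f                # over threshold: candidate next bound
    if depth == 0:
        return f, None                # frontier node within the threshold
    nxt = math.inf
    for child in children(node):
        found, t = _probe(child, depth - 1, g + cost(node, child),
                          bound, h, children, cost)
        if found is not None:
            return found, None
        nxt = min(nxt, t)
    return None, nxt
```

With a consistent heuristic both functions return the same backed-up value; the article's claim concerns how many states each visits along the way.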