Unconstrained optimization problems are closely related to systems of ordinary differential equations (ODEs) with gradient structure. In this work, we prove results that apply to both areas. We analyze the convergence properties of a trust region, or Levenberg-Marquardt, algorithm for optimization. The algorithm may also be regarded as a linearized implicit Euler method with adaptive timestep for gradient ODEs. From the optimization viewpoint, the algorithm is driven directly by the Levenberg-Marquardt parameter rather than the trust region radius. This approach is discussed, for example, in [R. Fletcher, Practical Methods of Optimization, 2nd ed., John Wiley, New York, 1987], but no convergence theory is developed there. We give a rigorous error analysis for the algorithm, establishing global convergence and an unusual, extremely rapid, type of superlinear convergence. The precise form of superlinear convergence is exhibited: the ratio of successive displacements from the limit point is bounded above and below by geometrically decreasing sequences. We also show how an inexpensive change to the algorithm leads to quadratic convergence. From the ODE viewpoint, this work contributes to the theory of gradient stability by presenting an algorithm that reproduces the correct global dynamics and gives very rapid local convergence to a stable steady state.
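
To make the optimization/ODE correspondence concrete, here is a minimal sketch in Python: solving (lam*I + H(x)) dx = -g(x) is a Levenberg-Marquardt step driven by the parameter lam, and at the same time one linearized implicit Euler step of size h = 1/lam for the gradient system x' = -grad f(x). The function names, the factor-of-4 update of lam, and the Rosenbrock test problem are illustrative assumptions, not the specific scheme analyzed in the paper.

    import numpy as np

    def lm_step(x, grad, hess, lam):
        # One Levenberg-Marquardt step with parameter lam; equivalently one
        # linearized implicit Euler step of size h = 1/lam for x' = -grad f(x):
        #     (lam*I + H(x)) dx = -g(x),   x_new = x + dx.
        g = grad(x)
        H = hess(x)
        dx = np.linalg.solve(lam * np.eye(x.size) + H, -g)
        return x + dx

    def minimize_lm(f, grad, hess, x0, lam=1.0, tol=1e-10, max_iter=200):
        # Illustrative adaptive control of lam (i.e., of the timestep h = 1/lam):
        # shrink lam after an accepted step, grow it after a rejected one.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            if np.linalg.norm(grad(x)) < tol:
                break
            x_new = lm_step(x, grad, hess, lam)
            if f(x_new) < f(x):
                x, lam = x_new, lam / 4.0   # accept: larger timestep next
            else:
                lam *= 4.0                  # reject: more damping, smaller timestep
        return x

    # Rosenbrock test: minimizer (1, 1) is the stable steady state of x' = -grad f.
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                               200 * (x[1] - x[0]**2)])
    hess = lambda x: np.array([[2 - 400 * x[1] + 1200 * x[0]**2, -400 * x[0]],
                               [-400 * x[0], 200.0]])
    print(minimize_lm(f, grad, hess, np.array([-1.2, 1.0])))  # approx [1. 1.]

Decreasing lam after a successful step grows the effective timestep h = 1/lam, so near a stable steady state the iteration behaves like implicit Euler with rapidly growing steps; this is the mechanism behind the rapid local convergence described above.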