Policy iteration is accelerated by replacing the direct methods used to solve the systems of linear equations arising in policy evaluation with faster iterative ones. The iterative methods used are nonstationary, or Krylov, methods, which are very efficient at solving large sparse systems. The performance of accelerated policy iteration is evaluated by solving a standard stochastic growth model. Accelerated policy iteration is up to 100 times faster than, and as accurate as, standard policy iteration and value iteration for problems of 'moderate' size (up to 1,000 states). Further improvements are achieved by a multigrid algorithm based on accelerated policy iteration. This algorithm is particularly efficient at solving large problems (exceeding 100,000 states), where it can be several million times faster than standard policy iteration. (C) 2002 Elsevier Science B.V. All rights reserved.
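The core idea can be illustrated with a minimal sketch: in each round of policy iteration, the policy-evaluation step solves the linear system (I - beta * P_pi) V = r_pi, and here that solve is handled by a Krylov method (SciPy's GMRES) rather than a direct factorization. This is not the paper's implementation; the toy MDP (random transitions and rewards), the state/action counts, and the discount factor are all illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import gmres

# Hypothetical toy MDP (illustrative only, not the paper's growth model):
rng = np.random.default_rng(0)
n_states, n_actions, beta = 50, 3, 0.95
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)       # row-stochastic transitions per action
R = rng.random((n_actions, n_states))   # rewards r(a, s)

policy = np.zeros(n_states, dtype=int)
V = np.zeros(n_states)
for _ in range(100):
    # Policy evaluation: solve (I - beta * P_pi) V = r_pi with a Krylov
    # method (GMRES), warm-started from the previous value function.
    P_pi = np.stack([P[policy[s], s] for s in range(n_states)])
    r_pi = R[policy, np.arange(n_states)]
    A = csr_matrix(np.eye(n_states) - beta * P_pi)
    V, _ = gmres(A, r_pi, x0=V, atol=1e-10)
    # Policy improvement: greedy action in every state.
    Q = R + beta * (P @ V)              # Q has shape (n_actions, n_states)
    new_policy = Q.argmax(axis=0)
    if np.array_equal(new_policy, policy):
        break                           # policy is stable: optimal
    policy = new_policy
```

For a sparse transition matrix, only the cost of the evaluation step changes: the direct O(n^3) solve becomes a handful of cheap matrix-vector products, which is what drives the speedups reported above.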