It is not uncommon now for geophysical inverse problems to be parameterized by 10^4 to 10^5 unknowns associated with upwards of 10^6 to 10^7 data constraints. The matrix problem defining the linearization of such a system (e.g., Am = b) is usually solved with a least-squares criterion (m = (A^T A)^{-1} A^T b). The size of the matrix, however, discourages the direct solution of the system, and researchers often turn to iterative techniques such as the method of conjugate gradients to obtain an estimate of the least-squares solution. These iterative methods take advantage of the sparseness of A, which often has as few as 2-3 percent of its elements nonzero, and do not require the calculation (or storage) of the matrix A^T A. Although there are usually many more data constraints than unknowns, these problems are, in general, underdetermined and therefore require some sort of regularization to obtain a solution. When the regularization is simple damping, the conjugate gradient method tends to converge in relatively few iterations. However, when derivative-type regularization is applied (first-derivative constraints to obtain the flattest model that fits the data; second-derivative constraints to obtain the smoothest), the convergence of parts of the solution may be drastically inhibited. In a series of 1-D examples and a synthetic 2-D crosshole tomography example, we demonstrate this problem and also suggest a method of accelerating the convergence through the preconditioning of the conjugate gradient search directions. We derive a 1-D preconditioning operator for the case of first-derivative regularization using a WKBJ approximation. We have found that preconditioning can reduce the number of iterations necessary to obtain satisfactory convergence by up to an order of magnitude. The conclusions we present are also relevant to Bayesian inversion, where a smoothness constraint is imposed through an a priori covariance of the model.
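The point that conjugate-gradient-type solvers need only products with A and A^T, never the explicitly formed matrix A^T A, can be sketched with a CGLS iteration (conjugate gradients applied to the normal equations). This is a minimal NumPy illustration, not the authors' implementation; the `cgls` name, matrix sizes, and tolerances are all illustrative.

```python
import numpy as np

def cgls(A, b, n_iter=100, tol=1e-10):
    """Conjugate gradients on the normal equations A^T A m = A^T b.
    Only matrix-vector products with A and A^T are used; A^T A is
    never formed or stored (for sparse A, these products are cheap)."""
    m = np.zeros(A.shape[1])
    s = b - A @ m            # data residual b - Am
    r = A.T @ s              # normal-equations residual A^T (b - Am)
    p = r.copy()
    rr = r @ r
    for _ in range(n_iter):
        if np.sqrt(rr) < tol:
            break
        q = A @ p
        alpha = rr / (q @ q)
        m += alpha * p
        s -= alpha * q
        r = A.T @ s
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return m

# Overdetermined toy problem: 200 data, 20 unknowns
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))
b = rng.standard_normal(200)

m_cg = cgls(A, b)
m_direct = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(m_cg, m_direct, atol=1e-8))
```

For a well-conditioned system like this one, the iteration matches the direct least-squares solution after at most as many iterations as there are unknowns; the abstract's point is that for 10^4 to 10^5 unknowns this is the only practical route.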
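Derivative-type regularization of the kind described above is commonly implemented by augmenting the system with a scaled difference operator. The sketch below shows the first-derivative (flattest-model) case for an underdetermined problem and verifies that the augmented least-squares system is equivalent to the regularized normal equations; the sizes and the weight `lam` are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_model = 30, 50        # underdetermined: more unknowns than data
A = rng.standard_normal((n_data, n_model))
b = rng.standard_normal(n_data)

# First-difference operator D, shape (n_model-1, n_model): (Dm)_i = m_{i+1} - m_i
D = np.diff(np.eye(n_model), axis=0)
lam = 0.5                       # regularization weight (illustrative)

# Flattest-model regularization as an augmented system [A; lam*D] m ~ [b; 0]
A_aug = np.vstack([A, lam * D])
b_aug = np.concatenate([b, np.zeros(n_model - 1)])
m_aug = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# Equivalent regularized normal equations (A^T A + lam^2 D^T D) m = A^T b
m_normal = np.linalg.solve(A.T @ A + lam**2 * (D.T @ D), A.T @ b)
print(np.allclose(m_aug, m_normal, atol=1e-8))
```

Replacing D with a second-difference operator gives the smoothest-model case. It is the D^T D term in the normal equations that, per the abstract, can drastically slow the convergence of parts of the conjugate-gradient solution.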
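The WKBJ-derived preconditioning operator of the paper is not reproduced here. As a generic stand-in, the sketch below uses a simple Jacobi-type (column-scaling) preconditioner to illustrate the mechanism by which preconditioning the search directions can reduce the iteration count; the badly scaled test matrix and all names are hypothetical.

```python
import numpy as np

def cgls(A, b, tol=1e-10, max_iter=500):
    """CGLS; returns the solution and the iteration count needed to
    reduce ||A^T (b - Am)|| by the factor tol."""
    m = np.zeros(A.shape[1])
    s = b.copy()
    r = A.T @ s
    p = r.copy()
    rr = r @ r
    rr0 = rr
    for k in range(max_iter):
        if np.sqrt(rr / rr0) < tol:
            return m, k
        q = A @ p
        alpha = rr / (q @ q)
        m += alpha * p
        s -= alpha * q
        r = A.T @ s
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return m, max_iter

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 40)) * np.logspace(0, 3, 40)  # badly scaled columns
b = rng.standard_normal(100)

# Jacobi-type preconditioning: scale columns to unit norm, i.e. solve for
# m' with A' = A S, then recover m = S m'.  This conditions A^T A's diagonal.
scale = 1.0 / np.sqrt(np.sum(A**2, axis=0))
m_prec, it_prec = cgls(A * scale, b)
m_plain, it_plain = cgls(A, b)
print(it_prec, it_plain)   # preconditioned run should need no more iterations
```

This diagonal scaling is far cruder than an operator tailored to derivative regularization, but it shows the same principle the abstract invokes: a preconditioner that flattens the spectrum of the effective normal-equations matrix can cut the number of iterations substantially.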