Computing the linear least-squares estimate of a high-dimensional random quantity given noisy data requires solving a large system of linear equations. In many situations, one can solve this system efficiently using a Krylov subspace method, such as the conjugate gradient (CG) algorithm. Computing the estimation error variances is a more intricate task. It is difficult because the error variances are the diagonal elements of a matrix expression involving the inverse of a given matrix. This paper presents a method for using the conjugate search directions generated by the CG algorithm to obtain a convergent approximation to the estimation error variances. The algorithm for computing the error variances falls out naturally from a new estimation-theoretic interpretation of the CG algorithm. This paper discusses this interpretation and convergence issues and presents numerical examples. The examples include a 10^5-dimensional estimation problem from oceanography.
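The core identity behind this kind of approach can be sketched as follows. If the error covariance is the inverse of a known symmetric positive-definite matrix A, then the A-conjugate search directions p_i generated by CG satisfy A^{-1} = Σ_i p_i p_iᵀ / (p_iᵀ A p_i) once all n directions are generated, so partial sums of the squared direction components give a running approximation to the diagonal of A^{-1}. The sketch below is a minimal illustration of that identity, not the paper's algorithm; the function name, test matrix, and tolerances are hypothetical.

```python
import numpy as np

def cg_variance_estimate(A, b, n_iter):
    """Run CG on A x = b and, as a by-product, accumulate an
    approximation to diag(A^-1) from the conjugate search directions:
    after n full steps, A^-1 = sum_i p_i p_i^T / (p_i^T A p_i)."""
    n = len(b)
    x = np.zeros(n)
    r = b.copy()          # residual b - A x
    p = r.copy()          # first search direction
    var = np.zeros(n)     # running approximation to diag(A^-1)
    for _ in range(n_iter):
        Ap = A @ p
        pAp = p @ Ap
        var += p**2 / pAp            # diagonal of p p^T / (p^T A p)
        alpha = (r @ r) / pAp
        x += alpha * p               # update solution estimate
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p         # next A-conjugate direction
        r = r_new
    return x, var

# Hypothetical small SPD test matrix (well-conditioned for illustration)
rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)
x, var = cg_variance_estimate(A, b, n)   # n full steps
exact = np.diag(np.linalg.inv(A))
```

After n steps the accumulated diagonal matches diag(A^{-1}) up to rounding, while truncating the loop earlier yields the kind of partial-sum approximation whose convergence the paper analyzes.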