This paper presents a robust, flexible and efficient algorithm for solving large-scale linear inverse problems. The method is iterative; at each iteration a perturbation in a q-dimensional subspace of the M-dimensional model space is sought. The basis vectors for the subspace are primarily steepest descent vectors obtained by segmenting the data misfit and model objective functions. The algorithm is efficient because only a q x q matrix needs to be inverted at each iteration instead of a matrix of order M; as M becomes large, the number of computations per iteration is of order qNM, where N is the number of data. An important feature of our algorithm is that positivity can easily be incorporated into the solution. We do this by introducing a two-segment mapping which transforms positive parameters to parameters defined on the real line. The nonlinear mapping requires that a line search involving forward modelling be implemented so that at each iteration we obtain a model which misfits the data to a predetermined level. This obviates the need to carry out additional inversions with trial-and-error selection of a Lagrange multiplier. In this paper we present the details of the subspace algorithm and explore the effect on convergence of using different strategies for selecting basis vectors and of altering the adjustable parameters which control the rate of decrease of the misfit and the rate of increase of the model norm as a function of iteration number.
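To make the flavour of such an iteration concrete, the sketch below implements a minimal subspace step in NumPy. It is not the authors' exact algorithm: the particular two-segment mapping (exponential below zero, linear above), the fixed trade-off parameter mu, the contiguous-block segmentation of the misfit gradient, and the simple backtracking line search are all illustrative assumptions. What it does share with the description above is the key structure: a q-dimensional basis built largely from segmented steepest descent vectors, a q x q projected system solved per iteration (the dominant cost, forming J @ V, is of order qNM), and positivity enforced through the nonlinear mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic linear inverse problem (illustrative sizes) ---
N, M, q = 40, 200, 8                 # data, model, and subspace dimensions
G = rng.standard_normal((N, M)) / np.sqrt(M)
m_true = np.abs(rng.standard_normal(M))          # a positive "true" model
d_obs = G @ m_true + 0.01 * rng.standard_normal(N)
target_misfit = N * 0.01**2                      # predetermined misfit level

# --- two-segment mapping enforcing positivity (illustrative choice) ---
def to_model(x, m0=1.0):
    """Map unconstrained x to positive m: exponential below 0, linear above (C1 at 0)."""
    return np.where(x < 0.0, m0 * np.exp(x), m0 * (1.0 + x))

def d_to_model(x, m0=1.0):
    """Derivative dm/dx of the two-segment mapping."""
    return np.where(x < 0.0, m0 * np.exp(x), m0)

def misfit(x):
    r = G @ to_model(x) - d_obs
    return r @ r

mu = 1e-2                                        # fixed trade-off (for brevity)
x = np.zeros(M)                                  # start at m = m0 = 1
for it in range(50):
    r = G @ to_model(x) - d_obs
    J = G * d_to_model(x)                        # chain rule: Jacobian w.r.t. x
    g_d = 2.0 * J.T @ r                          # gradient of data misfit
    g_m = 2.0 * mu * x                           # gradient of model norm (in x)

    # segmented steepest-descent basis: misfit gradient restricted to q-1
    # contiguous blocks, plus one column from the model-norm gradient
    V = np.zeros((M, q))
    for k, block in enumerate(np.array_split(np.arange(M), q - 1)):
        V[block, k] = g_d[block]
    V[:, q - 1] = g_m if np.linalg.norm(g_m) > 0 else rng.standard_normal(M)
    V, _ = np.linalg.qr(V)                       # orthonormalize the basis

    # q x q projected Gauss-Newton system; forming J @ V costs O(qNM)
    JV = J @ V
    A = JV.T @ JV + mu * V.T @ V
    b = -0.5 * V.T @ (g_d + g_m)
    a = np.linalg.solve(A, b)
    step = V @ a

    # backtracking line search on the step length
    alpha = 1.0
    while alpha > 1e-8 and misfit(x + alpha * step) >= misfit(x):
        alpha *= 0.5
    if alpha <= 1e-8:
        break                                    # no downhill step found
    x = x + alpha * step
    if misfit(x) <= target_misfit:
        break
```

Because positivity lives entirely in the mapping, the subspace solve itself remains an unconstrained q x q linear problem; every iterate `to_model(x)` is positive by construction, whatever step the line search accepts.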