For range sensing using depth-from-defocus methods, the distance D of a point object from the lens can be evaluated by the concise depth formula D = P/(Q - d_b), where P and Q are constants for a given camera setting and d_b is the diameter of the blur circle for the point object on the image detector plane.
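As a minimal illustration of the formula (the calibration constants and blur-circle diameters below are hypothetical values, not taken from the paper), depth recovery is a single evaluation per measured blur diameter:

    def depth_from_defocus(d_b, P, Q):
        """Concise depth formula D = P / (Q - d_b).

        P and Q are constants for a given camera setting (obtained by
        calibration); d_b is the blur-circle diameter on the detector plane.
        """
        return P / (Q - d_b)

    # Hypothetical calibration constants and blur measurements (illustrative only).
    P, Q = 2500.0, 12.0
    for d_b in (1.0, 3.0, 5.0):
        print(f"d_b = {d_b:.1f}  ->  D = {depth_from_defocus(d_b, P, Q):.1f}")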
The amount of defocus d_b is traditionally estimated from the spatial parameter of a Gaussian point spread function using a complex iterative solution. In this paper, we use a straightforward and computationally fast method to estimate the amount of defocus from a single camera. The observed gray-level image is initially converted into a gradient image using the Sobel edge operator. For the edge point of interest, the proportion of the blurred edge region p_e in a small neighborhood window is then calculated using the moment-preserving technique.
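The abstract leaves the details of the moment-preserving step to the paper itself; the sketch below shows one plausible reading, in which the first three moments of the Sobel gradient magnitudes inside the window are preserved by a bilevel split and p_e is taken as the fraction of window pixels assigned to the high-gradient (edge) class. The window size, the use of gradient magnitudes, and the class assignment are assumptions for illustration, not prescriptions from the abstract.

    import numpy as np
    from scipy import ndimage

    def blurred_edge_proportion(gray, edge_point, win=15):
        """Estimate p_e, the proportion of the blurred edge region inside a
        small window centred on an edge point, from a moment-preserving
        bilevel split of the Sobel gradient magnitudes (illustrative sketch).
        """
        gx = ndimage.sobel(gray.astype(float), axis=1)
        gy = ndimage.sobel(gray.astype(float), axis=0)
        grad = np.hypot(gx, gy)

        r, c = edge_point
        h = win // 2
        w = grad[r - h:r + h + 1, c - h:c + h + 1].ravel()

        # First three moments of the window values.
        m1, m2, m3 = (np.mean(w ** k) for k in (1, 2, 3))

        # Closed-form moment-preserving solution: two representative
        # levels z0 < z1 with fractions p0 and p1 = 1 - p0.
        cd = m2 - m1 ** 2
        if cd <= 1e-12:                 # flat window: no edge content
            return 0.0
        c0 = (m1 * m3 - m2 ** 2) / cd
        c1 = (m1 * m2 - m3) / cd
        disc = np.sqrt(max(c1 ** 2 - 4.0 * c0, 0.0))
        z0, z1 = (-c1 - disc) / 2.0, (-c1 + disc) / 2.0
        p0 = (z1 - m1) / (z1 - z0)

        # p_e: fraction of window pixels in the high-gradient (edge) class.
        return float(1.0 - p0)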
The value of p_e increases as the amount of defocus increases and is therefore used as a description of the degradation of the point-spread function. In addition to the use of the geometric depth formula for depth estimation, artificial neural networks are also proposed in this study to compensate for the estimation errors of the depth formula.
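The abstract does not describe the network architecture or its inputs; as a hedged sketch (using scikit-learn's MLPRegressor and synthetic calibration data, both assumptions for illustration), a small network could learn to map the blur measure and the formula's depth estimate to the calibrated true depth:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical calibration set: blur proportions p_e, formula depths D_f,
    # and ground-truth depths D_true measured at known target positions.
    rng = np.random.default_rng(0)
    p_e = rng.uniform(0.1, 0.9, 200)
    D_f = 2500.0 / (12.0 - 5.0 * p_e)           # formula estimates (illustrative constants)
    D_true = D_f * (1.0 + 0.03 * (p_e - 0.5))   # synthetic systematic error to be learned

    X = np.column_stack([p_e, D_f])
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    net.fit(X, D_true)

    # At run time the trained network replaces (or corrects) the raw formula output.
    print(net.predict(np.column_stack([[0.4], [2500.0 / (12.0 - 5.0 * 0.4)]])))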
Experiments show promising results: the RMS depth errors are within 5% for the depth formula and within 2% for the neural networks.