L. B. Wolff and E. Angelopoulou, "3-Dimensional Stereo by Photometric Ratios," Journal of the Optical Society of America A: Optics, Image Science, and Vision, 11(11), 1994, pp. 3069-3078.
We present a novel, robust methodology for corresponding a dense set of points on an object surface from photometric values, for three-dimensional stereo computation of depth. The methodology utilizes multiple stereo pairs of images, each stereo pair taken of the same scene but under different illumination. With just two stereo pairs of images taken under two different illumination conditions, a stereo pair of ratio images can be produced: one for the ratio of left-hand images and one for the ratio of right-hand images. We demonstrate how the photometric ratios composing these images can be used for accurate correspondence of object points. Object points having the same photometric ratio with respect to two different illumination conditions constitute a well-defined equivalence class of physical constraints, defined by local surface orientation relative to the illumination conditions. We formally show that, for diffuse reflection, the photometric ratio is invariant to varying camera characteristics, surface albedo, and viewpoint, and that therefore the same photometric ratio in both images of a stereo pair implies the same equivalence class of physical constraints. The correspondence of photometric ratios along epipolar lines in a stereo pair of images under different illumination conditions is therefore a robust correspondence of equivalent physical constraints, and the determination of depth from stereo can be performed without explicit knowledge of what these corresponding physical constraints actually are. Although illumination planning is required, our photometric-based stereo methodology does not require knowledge of the illumination conditions in the actual computation of three-dimensional depth, and it is applicable to perspective views. This technique extends the stereo determination of three-dimensional depth to smooth, featureless surfaces without the use of precisely calibrated lighting. We demonstrate experimental depth maps from a dense set of points on smooth objects of known ground-truth shape, determined to within 1% depth accuracy.
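
The invariance of the photometric ratio for diffuse reflection can be illustrated with a short derivation under an assumed Lambertian image model; the symbols below (camera gain g, albedo rho(p), surface normal n(p), distant illumination directions s_1 and s_2) are illustrative assumptions rather than notation taken from the paper itself.

% Sketch of the photometric-ratio invariance under an assumed Lambertian model.
\[
  I_k(p) \;=\; g\,\rho(p)\,\bigl(\mathbf{n}(p)\cdot\mathbf{s}_k\bigr),
  \qquad k = 1, 2,
\]
\[
  r(p) \;=\; \frac{I_1(p)}{I_2(p)}
       \;=\; \frac{g\,\rho(p)\,\bigl(\mathbf{n}(p)\cdot\mathbf{s}_1\bigr)}
                  {g\,\rho(p)\,\bigl(\mathbf{n}(p)\cdot\mathbf{s}_2\bigr)}
       \;=\; \frac{\mathbf{n}(p)\cdot\mathbf{s}_1}{\mathbf{n}(p)\cdot\mathbf{s}_2}.
\]

The gain and albedo cancel, and because Lambertian radiance does not depend on viewing direction, r(p) depends only on the local surface orientation relative to the two illumination directions. Equal ratio values in the left and right ratio images therefore indicate the same equivalence class of orientation constraints, which is what makes the ratio a viewpoint-invariant matching primitive.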
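
As a rough illustration of how ratio images could drive correspondence along epipolar lines, the following sketch (written in Python with NumPy as an editorial choice; all function and variable names are hypothetical and not taken from the paper) forms the two ratio images from a rectified stereo rig and, for each left-image pixel, searches the corresponding right-image row for the pixel whose photometric ratio is closest.

import numpy as np

def ratio_image(img_illum1, img_illum2, eps=1e-6):
    # Pixel-wise photometric ratio of two images of the same view,
    # taken under two different illumination conditions.
    return img_illum1 / np.maximum(img_illum2, eps)

def match_epipolar_row(left_ratio_row, right_ratio_row, max_disparity, tol=1e-2):
    # For each pixel in a rectified left-image row, find the right-image
    # pixel (within max_disparity) whose ratio value is closest.
    # Returns per-pixel disparities, or -1 where no match falls within tol.
    n = len(left_ratio_row)
    disparities = np.full(n, -1, dtype=int)
    for x_left in range(n):
        best_d, best_err = -1, tol
        for d in range(0, max_disparity + 1):
            x_right = x_left - d
            if x_right < 0:
                break
            err = abs(left_ratio_row[x_left] - right_ratio_row[x_right])
            if err < best_err:
                best_err, best_d = err, d
        disparities[x_left] = best_d
    return disparities

Once disparities are recovered, depth follows from the standard rectified-stereo relation z = f * B / d (focal length f, baseline B), a textbook relation rather than anything specific to the paper; the paper's own formulation additionally handles perspective views and does not require knowledge of the illumination directions at depth-computation time.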