We are surrounded by surfaces that we perceive by visual means. Understanding the basic principles behind this perceptual process is a central theme in visual psychology, psychophysics, and computational vision. In many of the computational models employed in the past, it has been assumed that a metric representation of physical space can be derived by visual means. Psychophysical experiments, as well as computational considerations, can convince us that the perception of space and shape has a much more complicated nature, and that only a distorted version of actual, physical space can be computed. This paper develops a computational geometric model that explains why such distortion might take place. The basic idea is that, both in stereo and motion, we perceive the world from multiple views. Given the rigid transformation between the views and the properties of the image correspondence, the depth of the scene can be obtained. Even a slight error in the rigid transformation parameters causes distortion of the computed depth of the scene. The unified framework introduced here describes this distortion in computational terms. We characterize the space of distortions by its level sets; that is, we characterize the systematic distortion via a family of iso-distortion surfaces, each of which describes the locus over which depths are distorted by a particular multiplicative factor. Given that humans' estimation of egomotion or of the extrinsic parameters of the stereo apparatus is likely to be imprecise, the framework is used to explain a number of psychophysical experiments on the perception of depth from motion or stereo.
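
As a concrete illustration of the kind of distortion the abstract summarizes, the following minimal sketch (with assumed numbers and symbols, not the paper's own derivation) shows how a small error in the estimated focus of expansion, for a camera translating forward, scales each recovered depth by a multiplicative factor that varies with image position; points sharing the same factor lie on one iso-distortion surface.

# Minimal numerical sketch: depth distortion caused by an error in the
# estimated focus of expansion (FOE) under pure forward translation.
# All quantities below (foe_true, foe_est, x, Z_true) are illustrative
# assumptions, not values from the paper.
import numpy as np

foe_true = 0.00   # true FOE x-coordinate (normalized image coordinates, focal length 1)
foe_est  = 0.05   # slightly wrong estimate of the FOE

# A few scene points: image x-coordinates and their true depths.
x      = np.array([0.2, 0.4, 0.6, 0.8])
Z_true = np.array([2.0, 2.0, 4.0, 4.0])

# Horizontal image motion induced by forward translation with t_z = 1:
# u = (x - x0) / Z  (standard translational optical-flow model).
u = (x - foe_true) / Z_true

# Depth recovered from the same image motion but with the erroneous FOE.
Z_est = (x - foe_est) / u

# Multiplicative distortion factor; points sharing the same factor lie
# on a single iso-distortion surface.
D = Z_est / Z_true
print(np.round(D, 3))   # [0.75  0.875 0.917 0.938] -- varies with image position

In this toy setting the distortion factor depends on where a point projects in the image, so surfaces of constant distortion partition the scene, which is the intuition behind the iso-distortion surfaces described above.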