It is now well established that depth is coded by local horizontal disparity and global vertical disparity. We present a computational model which explains how depth is extracted from these two types of disparities. The model uses the two headcentric directions (one for each eye) of binocular targets, derived from retinal signals and oculomotor signals. Headcentric disparity is defined as the difference between the headcentric directions of corresponding features in the left and right eye's images. Using Helmholtz's coordinate systems we decompose headcentric disparity into azimuthal and elevational disparity. Elevational disparities of real objects are zero if the signals which contribute to headcentric disparity do not contain any errors.
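To make the decomposition concrete, here is a minimal numerical sketch, not the authors' implementation. It assumes eyes located at ±I/2 on the interocular (x) axis, y pointing up and z straight ahead, and the Helmholtz convention in which elevation is rotation about the interocular axis. In the model the headcentric directions would be reconstructed from retinal and oculomotor signals; the sketch computes them directly from geometry. The name helmholtz_direction and the numerical values are ours.

```python
import numpy as np

I = 0.065  # assumed interocular distance in metres

def helmholtz_direction(p_eye):
    """Helmholtz (azimuth, elevation) of a point in eye-centred coordinates:
    x along the interocular axis, y up, z straight ahead. Elevation is the
    rotation about the interocular (x) axis; azimuth is the remaining angle
    out of the y-z plane."""
    x, y, z = p_eye
    elevation = np.arctan2(y, z)              # depends only on y and z
    azimuth = np.arctan2(x, np.hypot(y, z))   # signed angle out of the y-z plane
    return azimuth, elevation

# A real point in headcentric coordinates (metres).
P = np.array([0.10, 0.05, 0.60])

az_L, el_L = helmholtz_direction(P - np.array([-I / 2, 0.0, 0.0]))  # left eye
az_R, el_R = helmholtz_direction(P - np.array([+I / 2, 0.0, 0.0]))  # right eye

azimuthal_disparity = az_L - az_R    # non-zero: carries distance information
elevational_disparity = el_L - el_R  # exactly zero: both eyes lie on the x axis,
                                     # and Helmholtz elevation ignores x entirely
print(azimuthal_disparity, elevational_disparity)
```

The zero elevational disparity falls out of the convention itself: elevation depends only on y and z, and displacing an eye along the interocular axis changes neither, which is why any non-zero elevational disparity must come from signal errors rather than from the scene.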
Azimuthal headcentric disparity is a 1D quantity from which an exact equation relating distance and disparity can be derived. The equation is valid for all headcentric directions and for all binocular fixation positions. Such an equation does not exist if disparity is expressed in retinal coordinates.
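One candidate form of such a relation, derived under the same assumed geometry as the sketch above (eyes at (±I/2, 0, 0), point at headcentric (X, Y, Z)); the symbols φ_L, φ_R for the two Helmholtz azimuths are our notation, not necessarily the paper's:

```latex
% tan of each eye's Helmholtz azimuth, then their difference:
\tan\varphi_L - \tan\varphi_R
  = \frac{X + I/2}{\sqrt{Y^2 + Z^2}} - \frac{X - I/2}{\sqrt{Y^2 + Z^2}}
  = \frac{I}{\sqrt{Y^2 + Z^2}}
\quad\Longrightarrow\quad
\sqrt{Y^2 + Z^2} = \frac{I}{\tan\varphi_L - \tan\varphi_R}
```

Nothing in this derivation refers to where the eyes are pointing, which is why the relation holds for every headcentric direction and every fixation position; in retinal coordinates, by contrast, the measured disparity of a fixed point changes with eye orientation, so no single fixation-independent equation exists.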
Six possible types of error in oculomotor signals produce global elevational disparity fields which are characterised by different gradients in the azimuthal and elevational directions. Computations show that the elevational disparity fields uniquely characterise both the type and the size of the errors in oculomotor signals.
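The abstract does not spell out the six error types or their field signatures, but the identification step it describes can be sketched generically: if each error type of unit size produces a known elevational-disparity basis field over the visual field, and the six fields are linearly independent, a least-squares fit of the measured field onto the basis recovers both the type and the size of the error. The basis fields below are placeholders chosen only to be independent, not the paper's actual fields.

```python
import numpy as np

# Sample the visual field on a grid of Helmholtz azimuths/elevations (radians).
az, el = np.meshgrid(np.linspace(-0.5, 0.5, 21), np.linspace(-0.5, 0.5, 21))

# Hypothetical basis fields: elevational disparity per unit of each error type.
# The real fields would follow from the model's geometry, one per error type.
basis = [az, el, az * el, az**2, el**2, np.ones_like(az)]
A = np.stack([b.ravel() for b in basis], axis=1)   # (n_samples, 6)

def identify_errors(measured_field):
    """Least-squares fit of a measured elevational-disparity field onto the
    basis; the coefficients estimate the size of each error type."""
    coeffs, *_ = np.linalg.lstsq(A, measured_field.ravel(), rcond=None)
    return coeffs

# Example: a field generated by 0.01 rad of error type 2 is recovered exactly,
# with near-zero coefficients for the other five types.
print(identify_errors(0.01 * basis[1]))
```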
Our model uses a measure of the global elevational disparity field together with local azimuthal disparity to derive headcentric distance accurately throughout the visual field. The model explains existing data on whole-field disparity transformations as well as hitherto unexplained aspects of stereoscopic depth perception.
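Putting the pieces together, the pipeline the abstract describes might look like the following sketch, under all of the assumptions above and not as the authors' code: fit the global elevational field to estimate and remove the oculomotor-signal errors, then invert the azimuthal-disparity relation pointwise at every visual direction.

```python
import numpy as np

I = 0.065  # assumed interocular distance in metres

def distance_from_axis(az_L, az_R):
    """Invert the azimuthal-disparity relation: distance of a point from the
    interocular axis, given the two corrected headcentric azimuths (radians)."""
    return I / (np.tan(az_L) - np.tan(az_R))

# After the global elevational-field fit has corrected the oculomotor
# estimates, the relation applies locally throughout the visual field:
az_L = np.array([0.12, 0.30, -0.05])   # corrected left-eye azimuths (example)
az_R = np.array([0.02, 0.22, -0.14])   # corrected right-eye azimuths (example)
print(distance_from_axis(az_L, az_R))  # metres, one value per direction
```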