Despite extensive research on the influence of visual, vestibular, and somatosensory information on human postural control, it remains unclear how these sensory channels are fused for self-orientation. The present study tested whether a linear additive model could account for the fusion of touch and vision for postural control. We simultaneously manipulated visual and somatosensory (touch) stimuli in five conditions of single- and multisensory stimulation. The visual stimulus was a display of random dots projected onto a screen in front of the standing subject. The somatosensory stimulus was a rigid plate that subjects contacted lightly (<1 N of force) with their right index fingertip. In each condition, one sensory stimulus oscillated (dynamic) in the medial-lateral direction while the other stimulus was either dynamic, static, or absent. The results qualitatively supported five predictions of the linear additive model, in that the patterns of gain and variability across conditions were consistent with model predictions. However, a strict quantitative comparison revealed significant deviations from model predictions, indicating that the sensory fusion process clearly has nonlinear aspects. We suggest that the sensory fusion process behaved in an approximately linear fashion because the experimental paradigm tested postural control very close to the equilibrium point of vertical upright.
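The core additivity prediction can be illustrated with a minimal sketch. All symbols and numbers below (stimulus frequency, amplitudes, channel gains `G_v` and `G_t`) are hypothetical placeholders, not values from the study; the sketch only shows what strict linearity would imply, namely that the postural response to combined visual and touch stimulation equals the sum of the responses to each stimulus alone.

```python
import numpy as np

# Illustrative linear additive fusion model (all values assumed, not
# taken from the paper): lateral body sway x(t) is modeled as a
# weighted sum of the visual and touch stimulus motions.
t = np.linspace(0, 10, 1000)                # time (s)
visual = 0.02 * np.sin(2 * np.pi * 0.2 * t)  # visual scene motion (m), assumed
touch  = 0.01 * np.sin(2 * np.pi * 0.2 * t)  # touch plate motion (m), assumed
G_v, G_t = 0.4, 0.5                           # assumed channel gains

sway_visual_only = G_v * visual               # unimodal visual condition
sway_touch_only  = G_t * touch                # unimodal touch condition
sway_combined    = G_v * visual + G_t * touch # bimodal (additive) prediction

# Under strict linearity, the bimodal response is exactly the sum of
# the unimodal responses; deviations from this equality in the data
# would indicate nonlinear aspects of sensory fusion.
print(np.allclose(sway_combined, sway_visual_only + sway_touch_only))
```

In the study's terms, testing this prediction amounts to comparing measured gains and variability in the dynamic-plus-dynamic condition against the sum predicted from the single-stimulus conditions.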