This experiment examines how emotion is perceived from the facial and vocal cues of a speaker. Three levels of facial affect were presented using a computer-generated face. Three levels of vocal affect were obtained by recording the voice of a male amateur actor who spoke a semantically neutral word in different simulated emotional states. The two independent variables were presented to subjects in all possible combinations: visual cues alone, vocal cues alone, and visual and vocal cues together, giving a total set of 15 stimuli. Subjects were asked to judge the emotion of each stimulus in a two-alternative forced-choice task (either HAPPY or ANGRY). The results indicate that subjects evaluate and integrate information from both modalities to perceive emotion. The influence of one modality was greater to the extent that the other was ambiguous (neutral). The fuzzy logical model of perception (FLMP) fit the judgments significantly better than an additive model, a result that weakens theories based on an additive combination of the modalities, on categorical perception, or on influence from only a single modality.
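For comparison, the two models can be sketched in their usual two-alternative form, where f_i and v_j denote the degree of support for HAPPY provided by facial level i and vocal level j (this notation is assumed here for illustration and is not given in the abstract):

% FLMP: multiplicative integration of the two supports, normalized by the
% relative goodness rule
P(\mathrm{HAPPY} \mid F_i, V_j) = \frac{f_i \, v_j}{f_i \, v_j + (1 - f_i)(1 - v_j)}

% Additive model: the same supports are combined by averaging
P(\mathrm{HAPPY} \mid F_i, V_j) = \frac{f_i + v_j}{2}

On this formulation, when one source is neutral (support near 0.5) the FLMP prediction follows the other source almost exactly, whereas the averaging rule compresses the response toward 0.5; this is the pattern summarized above, that one modality's influence grows as the other becomes ambiguous.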