In social environments, multiple sensory channels are simultaneously engaged in the service of communication. In this experiment, we were concerned with defining the neuronal mechanisms for a perceptual bias in processing simultaneously presented emotional voices and faces. Specifically, we were interested in how bimodal presentation of a fearful voice facilitates recognition of a fearful facial expression. Using event-related functional MRI in a design that crossed sensory modality (visual or auditory) with emotional expression (fearful or happy), we show that perceptual facilitation during face fear processing is expressed through modulation of neuronal responses in the amygdala and the fusiform cortex. These data suggest that the amygdala is important for emotional crossmodal sensory convergence, and that the associated perceptual bias during fear processing is mediated by task-related modulation of face-processing regions of the fusiform cortex.