Most models of visual search, whether involving overt eye movements or covert shifts of attention, are based on the concept of a saliency map, that is, an explicit two-dimensional map that encodes the saliency or conspicuity of objects in the visual environment. Competition among neurons in this map gives rise to a single winning location that corresponds to the next attended target. Inhibiting this location automatically allows the system to attend to the next most salient location. We describe a detailed computer implementation of such a scheme, focusing on the problem of combining information across modalities, here orientation, intensity and color information, in a purely stimulus-driven manner. The model is applied to common psychophysical stimuli as well as to a very demanding visual search task. Its successful performance is used to address the extent to which the primate visual system carries out visual search via one or more such saliency maps and how
this can be tested.
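The abstract describes a cycle of winner-take-all competition over a saliency map built from several feature channels, followed by inhibition of return at the winning location. Below is a minimal NumPy sketch of that cycle, offered only as an illustration: the simple [0, 1] rescaling, the equal-weight averaging of channels, and the circular inhibition region are simplifying assumptions, not the paper's actual normalization operator or neural dynamics.

```python
import numpy as np

def rescale(m):
    """Crude stand-in for a map-normalization step: rescale to [0, 1]."""
    m = m - m.min()
    return m / m.max() if m.max() > 0 else m

def attend(feature_maps, n_shifts=5, inhibition_radius=3):
    """Combine feature maps into a saliency map, then repeatedly select the
    most salient location and suppress it (inhibition of return)."""
    saliency = sum(rescale(m) for m in feature_maps) / len(feature_maps)
    scanpath = []
    for _ in range(n_shifts):
        # Winner-take-all: the most active location is the next attended target.
        y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
        scanpath.append((int(y), int(x)))
        # Inhibition of return: zero out a neighborhood of the winner so that
        # attention shifts to the next most salient location on the next pass.
        yy, xx = np.ogrid[:saliency.shape[0], :saliency.shape[1]]
        saliency[(yy - y) ** 2 + (xx - x) ** 2 <= inhibition_radius ** 2] = 0.0
    return scanpath

# Toy example: three random arrays standing in for orientation, intensity
# and color conspicuity maps.
rng = np.random.default_rng(0)
maps = [rng.random((32, 32)) for _ in range(3)]
print(attend(maps))
```

Run as-is, this prints a short scanpath of attended coordinates over the toy maps; in the model proper, each channel would be derived from the image (orientation, intensity, color) rather than sampled at random.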