Detailed processing of sensory information is a computationally demanding task. This is especially true for vision, where the amount of information provided by the sensors typically exceeds the processing capacity of the system. Rather than attempting to process all the sensory data simultaneously, an effective strategy is to focus on subregions of the input space, shifting from one subregion to another in a serial fashion. This strategy is commonly referred to as selective attention. We present a neuromorphic active-vision system that implements a saliency-based model of selective attention. Visual data are sensed and preprocessed in parallel by a transient imager chip and transmitted to a selective-attention chip, which sequentially selects the spatial locations of salient regions in the vision sensor's field of view. A host computer uses the output of the selective-attention chip to drive the motors on which the imager is mounted, orienting it toward the selected regions. The system's design framework is modular and allows the integration of multiple sensors and multiple selective-attention chips. We present experimental results showing the performance of a two-chip system in response to both well-controlled test stimuli and natural stimuli.
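The serial selection strategy described above can be sketched in software. The following is a minimal illustration, not the chip's actual circuit: it assumes a precomputed 2-D saliency map and models the two mechanisms commonly used in saliency-based attention, winner-take-all selection of the most salient location and inhibition of return to force the focus to shift to the next region. All function and parameter names are hypothetical.

```python
import numpy as np

def select_salient_regions(saliency, n_shifts=3, inhibition_radius=2):
    """Serially select the most salient locations in a saliency map.

    Each winner is suppressed afterward (inhibition of return) so that
    attention shifts to the next-most-salient region on the next step.
    """
    s = saliency.astype(float).copy()
    winners = []
    for _ in range(n_shifts):
        # Winner-take-all: pick the current global maximum of the map.
        y, x = np.unravel_index(np.argmax(s), s.shape)
        winners.append((int(y), int(x)))
        # Inhibition of return: suppress a neighborhood around the winner.
        y0, y1 = max(0, y - inhibition_radius), y + inhibition_radius + 1
        x0, x1 = max(0, x - inhibition_radius), x + inhibition_radius + 1
        s[y0:y1, x0:x1] = -np.inf
    return winners

# Toy saliency map with two salient spots of different strength.
m = np.zeros((8, 8))
m[2, 3] = 5.0
m[6, 6] = 3.0
print(select_salient_regions(m, n_shifts=2))  # → [(2, 3), (6, 6)]
```

In the hardware system the saliency map comes from the transient imager's preprocessed output, and the selected coordinates play the role of the attention chip's output that the host computer uses to orient the imager.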