From birth, human infants are immersed in a social environment that allows them to learn by leveraging the skills and capabilities of their caregivers. A critical precursor to this type of social learning is the ability to maintain interaction levels that are neither overwhelming nor under-stimulating. In this paper, we present a mechanism that enables an autonomous robot to regulate the intensity of its social interactions with a human. Similar to the feedback an infant provides to its caregiver, the robot uses expressive displays to modulate the interaction intensity. This mechanism is integrated within a general framework that combines perception, attention, drives, emotions, behavior selection, and motor acts. We present a specific implementation of this architecture that enables the robot to react appropriately to both social stimuli (faces) and non-social stimuli (moving toys) while maintaining a suitable interaction intensity. We present results from both face-to-face interactions and interactions mediated through a toy.
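As a rough illustration of how such a homeostatic regulation loop might be structured, the sketch below models a single social drive whose level is raised by stimulus intensity and decays in its absence; the drive's deviation from a set point selects an expressive display that cues the human to intensify or ease off the interaction. This is a minimal sketch under our own assumptions: the class name, thresholds, and display labels are illustrative and do not correspond to the paper's actual implementation.

```python
# Minimal sketch of a homeostatic interaction-regulation loop.
# All names, thresholds, and display labels are illustrative
# assumptions, not the architecture described in the paper.

class SocialDrive:
    """A drive that drifts toward under-stimulation over time and is
    pushed back toward over-stimulation by incoming stimuli."""

    def __init__(self, set_point=0.0, decay=0.05, band=0.3):
        self.level = set_point      # current drive level
        self.set_point = set_point  # homeostatic target
        self.decay = decay          # drift toward under-stimulation per step
        self.band = band            # half-width of the comfortable regime

    def update(self, stimulus_intensity):
        # Stimulation raises the level; lack of stimulation lowers it.
        self.level += stimulus_intensity - self.decay
        return self.level

    def expressive_display(self):
        # Map the drive's deviation from its set point to a display
        # that asks the caregiver to adjust the interaction intensity.
        error = self.level - self.set_point
        if error > self.band:        # overwhelmed: signal the human to back off
            return "withdraw / averted gaze"
        if error < -self.band:       # under-stimulated: solicit engagement
            return "attentive / interested expression"
        return "neutral engagement"  # intensity is within the comfortable band


if __name__ == "__main__":
    drive = SocialDrive()
    # Toy trace: a quiet phase followed by an intense waving-toy phase.
    for intensity in [0.0, 0.0, 0.0, 0.4, 0.5, 0.5, 0.0]:
        drive.update(intensity)
        print(f"level={drive.level:+.2f} -> {drive.expressive_display()}")
```

Run on the toy trace above, the drive first drifts toward under-stimulation and then overshoots the comfortable band as the stimulus intensifies, producing the two corrective displays in turn; this mirrors, in spirit, how the robot's expressive feedback is meant to keep the interaction neither overwhelming nor under-stimulating.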