Assembly robots that use an active camera system for visual feedback can achieve greater flexibility, including the ability to operate in an uncertain and changing environment. Incorporating active vision into a robot control loop involves some inherent difficulties, including calibration and the need to redefine the servoing goal as the camera configuration changes. In this paper, we propose a novel self-organizing neural network that learns a calibration-free spatial representation of 3D point targets in a manner that is invariant to changing camera configurations. This representation is used to develop a new framework for robot control with active vision. The salient feature of this framework is that it decouples active camera control from robot control. The feasibility of this approach is established through computer simulations and experiments with the University of Illinois Active Vision System (UIAVS).