This paper describes vision-based behaviors for docking operations in mobile robotics. Two different situations are presented: in ego-docking, each robot is equipped with a camera and its motion is controlled while docking to a surface, whereas in eco-docking, the camera and all the necessary computational resources are placed in a single external docking station, which may serve several robots. In both situations, the goal is to control both the orientation, aligning the camera's optical axis with the surface normal, and the approach speed, slowing down during the maneuver. These goals are accomplished without any effort to perform 3D reconstruction of the environment or any need to calibrate the setup, in contrast with traditional approaches. Instead, we use image measurements directly to close the control loop of the mobile robot. In the proposed approach, the robot motion is driven directly by the first-order time-space image derivatives, which can be estimated robustly and quickly. The docking system operates in real time, and its performance is robust in both the ego-docking and eco-docking paradigms. Experiments are described. © 1997 Academic Press.
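The abstract does not state the control law itself. As a rough illustration of how first-order time-space image derivatives can close a docking control loop without 3D reconstruction or calibration, here is a minimal Python sketch. The finite-difference derivative estimator, the normal-flow heuristics, the left/right asymmetry cue, and all gains (`k_omega`, `target_flow`) are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def spacetime_derivatives(prev_frame, curr_frame):
    """First-order time-space derivatives (Ix, Iy, It) from two
    consecutive grayscale frames, via finite differences."""
    avg = 0.5 * (prev_frame + curr_frame)   # average frames to reduce noise
    Iy, Ix = np.gradient(avg)               # spatial derivatives (row, col order)
    It = curr_frame - prev_frame            # temporal derivative
    return Ix, Iy, It

def docking_commands(prev_frame, curr_frame, k_omega=0.5, target_flow=0.2):
    """Map image derivatives to (forward speed, turn rate).

    Heuristics (assumed here, not taken from the paper):
    - the normal-flow magnitude |It| / |grad I| rises as the surface
      nears, so speed is reduced to hold the mean flow near
      `target_flow` (a crude proximity cue);
    - a left/right flow asymmetry signals a slanted surface, so the
      turn rate steers until the optical axis is roughly aligned with
      the surface normal (sign convention is arbitrary).
    """
    Ix, Iy, It = spacetime_derivatives(prev_frame, curr_frame)
    grad_mag = np.sqrt(Ix**2 + Iy**2) + 1e-6       # avoid division by zero
    flow = np.abs(It) / grad_mag                    # per-pixel normal-flow magnitude

    half = flow.shape[1] // 2
    left, right = flow[:, :half].mean(), flow[:, half:].mean()

    omega = k_omega * (right - left) / (left + right + 1e-6)  # turn command
    v = max(0.0, 1.0 - flow.mean() / target_flow)             # slow near surface
    return v, omega
```

In this sketch the only quantities entering the loop are the image derivatives themselves, which is the point the abstract makes: no depth map, pose estimate, or camera calibration is required, and the same measurements serve whether the camera rides on the robot (ego-docking) or sits at the station (eco-docking).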