AFFINE VISUAL SERVOING FOR ROBOT RELATIVE POSITIONING AND LANDMARK-BASED DOCKING

Citation
C. Colombo et al., Affine visual servoing for robot relative positioning and landmark-based docking, Advanced Robotics, 9(4), 1995, pp. 463-480
Citation count
15
Subject Categories
Robotics & Automatic Control
Journal title
Advanced Robotics
ISSN journal
0169-1864
Volume
9
Issue
4
Year of publication
1995
Pages
463 - 480
Database
ISI
SICI code
0169-1864(1995)9:4<463:AVSFRR>2.0.ZU;2-0
Abstract
This paper addresses the problem of positioning a robot camera with respect to a fixed object in space by means of visual information. The ultimate goal of positioning is to achieve and/or maintain a given spatial configuration (position and orientation) with respect to the objects in the environment, so as to best execute the task at hand. Positioning involves the control of 6 d.o.f. in space, which are conveniently referred to as the parameters of the transformation between a camera-centered frame and an object-centered frame. In this paper, we address the positioning problem in terms of these d.o.f., regardless of the specific robot configuration used to move the camera (e.g. an eye-in-hand setup, a navigation platform with a robot head mounted on it, etc.). The domain of application ranges from navigation tasks (e.g. localization, docking, steering by means of natural landmarks) to grasping and manipulation tasks, and to autonomous/intelligent tasks based on active visual behaviors such as reading a book or reaching and operating a control panel. The solution proposed in this work is to exploit the changes in shape of contours in order to plan and control the positioning process. To simplify and speed up the calculations, an affine camera model is used to describe the changes of shape of the contours in the image plane, and an affine visual servoing (AVS) approach is derived. The choice of two-dimensional (2D) features for control greatly enhances the robustness of the positioning process, since the effects of robot kinematics and camera modeling errors are reduced. Among the possible 2D features, visual contours make it possible to obtain robust visual estimates while keeping the dimensionality of the control equations low; the same would not be possible with features such as points or lines. Finally, a feedforward control strategy complements the feedback loop, thereby enhancing the speed and the overall performance of the algorithm. Although a stability analysis of the control scheme has not yet been performed, good simulation results showing stable behavior, provided that control parameters and gains are properly tuned, suggest that the approach may be successfully applied in real-world cases.
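To make the notion of 2D contour shape change more concrete, the following is a minimal sketch (Python/NumPy), not the authors' implementation: under an affine camera model, the image of a planar contour deforms between views according to a 2D affine map x' = A x + b, and the parameters (A, b) estimated by least squares from tracked contour points can serve as the 2D features that a servoing loop drives toward their reference values. The function name fit_affine_2d and the toy square contour are hypothetical.

import numpy as np

def fit_affine_2d(ref_pts, cur_pts):
    """Least-squares fit of a 2D affine map x' = A x + b between two
    sampled views of the same contour (N x 2 arrays of corresponding points)."""
    n = ref_pts.shape[0]
    # Each point correspondence contributes two linear equations in the
    # six affine parameters [a11, a12, a21, a22, bx, by].
    M = np.zeros((2 * n, 6))
    y = cur_pts.reshape(-1)
    M[0::2, 0:2] = ref_pts   # x'-equations: a11*x + a12*y + bx
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = ref_pts   # y'-equations: a21*x + a22*y + by
    M[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(M, y, rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    b = np.array([p[4], p[5]])
    return A, b

# Toy usage: a reference square contour deformed by a known affine map.
ref = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])
b_true = np.array([0.05, -0.03])
cur = ref @ A_true.T + b_true
A_est, b_est = fit_affine_2d(ref, cur)
# The deviation of (A_est, b_est) from (identity, zero) is the kind of
# low-dimensional 2D shape-change feature a visual-servoing loop could
# drive to zero, in the spirit of the AVS approach described above.
shape_error = np.hstack([(A_est - np.eye(2)).ravel(), b_est])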