MOBILE ROBOT SELF-LOCATION USING MODEL-IMAGE FEATURE CORRESPONDENCE

Citation
R. Talluri and J. K. Aggarwal, MOBILE ROBOT SELF-LOCATION USING MODEL-IMAGE FEATURE CORRESPONDENCE, IEEE Transactions on Robotics and Automation, 12(1), 1996, pp. 63-77
Citations number
32
Subject Categories
Computer Application, Chemistry & Engineering; Control Theory & Cybernetics; Robotics & Automatic Control; Engineering, Electrical & Electronic
ISSN journal
1042-296X
Volume
12
Issue
1
Year of publication
1996
Pages
63 - 77
Database
ISI
SICI code
1042-296X(1996)12:1<63:MRSUMF>2.0.ZU;2-T
Abstract
The problem of establishing reliable and accurate correspondence between a stored 3-D model and a 2-D image of it is important in many computer vision tasks, including model-based object recognition, autonomous navigation, pose estimation, airborne surveillance, and reconnaissance. This paper presents an approach to solving this problem in the context of autonomous navigation of a mobile robot in an outdoor urban, man-made environment. The robot's environment is assumed to consist of polyhedral buildings. The 3-D descriptions of the lines constituting the buildings' rooftops are assumed to be given as the world model. The robot's position and pose are estimated by establishing correspondence between the straight-line features extracted from the images acquired by the robot and the model features. The correspondence problem is formulated as a two-stage constrained search problem. Geometric visibility constraints are used to reduce the search space of possible model-image feature correspondences. Techniques for effectively deriving and capturing these visibility constraints from the given world model are presented. The position estimation technique is shown to be robust and accurate even in the presence of errors in feature detection, incomplete model descriptions, and occlusions. Experimental results of testing this approach using a model of an airport scene are presented.
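The general idea the abstract describes (hypothesize a robot pose, project the visible model lines into the image, and score the match against the line features extracted from the image) can be sketched in miniature as follows. This is an illustrative sketch only: the function names, the simple pinhole projection, and the midpoint-distance matching rule are assumptions for the example, not the paper's actual two-stage constrained search.

```python
import math

def project_point(p, pose, focal=1.0):
    """Project a 3-D world point into the image plane for a robot pose
    (x, y, heading). The camera looks along the heading; a point behind
    the camera is treated as not visible (a crude visibility constraint)."""
    x, y, theta = pose
    dx, dy = p[0] - x, p[1] - y
    # rotate the world offset into camera coordinates
    cx = math.cos(-theta) * dx - math.sin(-theta) * dy
    cy = math.sin(-theta) * dx + math.cos(-theta) * dy
    cz = p[2]
    if cx <= 1e-6:            # behind the camera: prune, not visible
        return None
    return (focal * cy / cx, focal * cz / cx)

def project_line(line3d, pose):
    """Project a 3-D model line (pair of endpoints); None if either
    endpoint fails the visibility check."""
    a = project_point(line3d[0], pose)
    b = project_point(line3d[1], pose)
    return (a, b) if a is not None and b is not None else None

def score_pose(pose, model_lines, image_lines, tol=0.1):
    """Count model lines whose projection lands near some extracted
    image line. Visibility pruning above shrinks the set of candidate
    model-image correspondences before any matching is attempted."""
    matched = 0
    for ml in model_lines:
        proj = project_line(ml, pose)
        if proj is None:
            continue
        mid = ((proj[0][0] + proj[1][0]) / 2, (proj[0][1] + proj[1][1]) / 2)
        for il in image_lines:
            imid = ((il[0][0] + il[1][0]) / 2, (il[0][1] + il[1][1]) / 2)
            if math.dist(mid, imid) < tol:
                matched += 1
                break
    return matched
```

In this toy version, self-location amounts to searching over candidate poses and keeping the one with the highest score; the paper's contribution is making that search tractable by deriving the visibility constraints systematically from the world model rather than testing every pose against every line.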