A self-localization system for autonomous mobile robots is presented. The system estimates the robot's position in previously learned environments, using data provided solely by an omnidirectional visual perception subsystem composed of a camera and a special conical reflecting surface. This subsystem performs optical pre-processing of the environment, yielding a compact representation of the collected data. These data are then fed to a learning subsystem that associates the perceived image with an estimate of the robot's actual position. Both neural networks and statistical methods have been tested and compared as learning subsystems. The system has been implemented and tested, and experimental results are presented.
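The abstract does not disclose the learning architectures or the form of the compact image representation, so the following is only a minimal sketch of the idea: a learned regressor maps a low-dimensional omnidirectional descriptor to a 2-D position estimate. The descriptor function, its dimensionality, the synthetic training data, and the scikit-learn models (MLPRegressor standing in for the neural network, KNeighborsRegressor for the statistical method) are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in for the compact omnidirectional descriptor:
# each pose (x, y) yields a low-dimensional feature vector. We fabricate
# a smooth pose -> descriptor map purely for illustration.
def descriptor(pose, dim=16):
    freqs = np.arange(1, dim // 2 + 1)
    return np.concatenate([np.sin(freqs * pose[0]), np.cos(freqs * pose[1])])

# Training set: known positions (metres) and their noisy descriptors.
poses = rng.uniform(0.0, 5.0, size=(500, 2))
X = np.array([descriptor(p) for p in poses])
X += rng.normal(scale=0.01, size=X.shape)  # simulated sensor noise

# Neural-network regressor: descriptor -> (x, y).
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(X, poses)

# Statistical alternative trained on the same data: k-NN regression.
knn = KNeighborsRegressor(n_neighbors=5)
knn.fit(X, poses)

# Localize from a new perceived descriptor.
query = descriptor(np.array([2.0, 3.0]))
print("MLP estimate:", mlp.predict([query])[0])
print("kNN estimate:", knn.predict([query])[0])
```

Framing localization as multi-output regression lets both subsystems be trained and queried on identical data, which mirrors the head-to-head comparison the abstract describes.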