Since the late 1960s, autonomous robots have been the subject of worldwide research efforts. Various techniques enable mobile robots to navigate robustly within their environments, and some systems are commercially available, most of them for transport and floor-cleaning applications. In general, however, these systems are not truly autonomous: they require human aid to build appropriate environment models, e.g. navigation maps, which they need for planning. Even high-end research robots normally need help to configure and adjust their sensors, e.g. vision systems whose many parameters the user must tune before the robot can 'see' in a given environment. Real service robots will have to do this autonomously, as no helping scientist will be available. This paper presents a first step in this direction: it shows how a useful, self-learning vision system can be constructed, and that such a system can supply the robot with the information required to 'survive' in complex everyday environments. (C) 1999 Elsevier Science Ltd. All rights reserved.