In this paper we propose a specialized hardware architecture for the real-time visual navigation of a mobile robot. The adopted navigation method is based on a two-step approach. Features are extracted and matched over an image sequence captured by a video camera mounted on the mobile robot during its motion. From these matches, a 2D motion field is recovered and used to extract the ego-motion parameters. Our hardware implements the first step of the method, which consists of feature extraction and raw match computation by means of radiometric similarity. Real-time performance is achieved thanks to a 40 MHz processing rate. (C) 2001 Academic Press.
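To illustrate the kind of raw match computation described above, the sketch below pairs a feature patch with its best match in the next frame using zero-mean normalized cross-correlation as a stand-in radiometric similarity measure; the exact similarity formula and search strategy implemented by the hardware are not specified in the abstract, so all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def radiometric_similarity(patch_a, patch_b):
    """Zero-mean normalized cross-correlation between two equal-size patches.
    An illustrative stand-in for the paper's radiometric similarity measure."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:
        return 0.0  # flat patches carry no radiometric information
    return float((a * b).sum() / denom)

def match_feature(patch, next_frame, top_left, search_radius=4):
    """Raw match: scan a small window around the feature's previous position
    in the next frame, returning the displacement that maximizes similarity."""
    h, w = patch.shape
    y0, x0 = top_left
    best_score, best_offset = -2.0, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = y0 + dy, x0 + dx
            # skip candidate windows that fall outside the frame
            if y < 0 or x < 0 or y + h > next_frame.shape[0] or x + w > next_frame.shape[1]:
                continue
            s = radiometric_similarity(patch, next_frame[y:y + h, x:x + w])
            if s > best_score:
                best_score, best_offset = s, (dy, dx)
    return best_offset, best_score

# Demo: a textured frame whose content shifts by (2, 3) between frames.
rng = np.random.default_rng(0)
frame = rng.integers(0, 255, (32, 32))
next_frame = np.roll(frame, shift=(2, 3), axis=(0, 1))
patch = frame[10:18, 10:18]
offset, score = match_feature(patch, next_frame, (10, 10))
```

Accumulating such per-feature displacements over the whole image yields the 2D motion field from which the ego-motion parameters are then extracted.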