Conventional computer vision methods for determining a robot's end-effector motion from sensory data require sensor calibration (e.g., camera calibration) and sensor-to-hand calibration (e.g., hand-eye calibration). This involves considerable computation and can present difficulties, especially when different kinds of sensors are involved. In this correspondence, we present a neural network approach to the motion determination problem that requires no calibration. Two kinds of sensory data, namely camera images and laser range data, are used as the input to a multilayer feedforward network that learns the direct transformation from the sensory data to the required motions; this provides a practical sensor fusion method. Using a recursive motion strategy together with a network correction, we relax the requirement that the learned transformation be exact. Another important feature of our work is that the goal position can be changed without retraining the network. Experimental results show the effectiveness of our method.
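The sketch below illustrates the idea at a high level only; it is not the paper's implementation. It assumes a small numpy feedforward network with illustrative layer sizes, randomly initialized weights standing in for a trained network, and a hypothetical sense() stub in place of the real camera and laser interfaces. The fused sensor vector is mapped directly to an end-effector motion, and the motion is applied recursively so that fresh sensor readings correct each step.

```python
# Minimal sketch (assumptions, not the authors' code): a multilayer
# feedforward network maps fused sensory data (camera-image features
# plus laser range readings) directly to an end-effector motion.
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM, LASER_DIM, HIDDEN, MOTION_DIM = 16, 4, 32, 6  # assumed sizes

# Random weights stand in for a trained network.
W1 = rng.standard_normal((HIDDEN, IMG_DIM + LASER_DIM)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((MOTION_DIM, HIDDEN)) * 0.1
b2 = np.zeros(MOTION_DIM)

def predict_motion(image_feats, laser_ranges):
    """Feedforward pass: fused sensor vector -> end-effector motion."""
    x = np.concatenate([image_feats, laser_ranges])  # simple sensor fusion
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def sense():
    """Hypothetical stand-in for reading the camera and laser sensors."""
    return rng.standard_normal(IMG_DIM), rng.standard_normal(LASER_DIM)

# Recursive motion strategy: each step the network proposes a motion,
# the robot moves, and new sensor readings correct the next step, so
# the learned transformation need not be exact.
pose = np.zeros(MOTION_DIM)
for step in range(5):
    img, laser = sense()
    delta = predict_motion(img, laser)
    pose += delta  # in a real system this would command the robot
    print(f"step {step}: |delta| = {np.linalg.norm(delta):.4f}")
```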