This article describes a self-organizing neural network architecture that transforms optic flow and eye position information into representations of heading, scene depth, and moving object locations. These representations are used to navigate reactively in simulations involving obstacle avoidance and pursuit of a moving target. The network's weights are trained during an action-perception cycle in which self-generated eye and body movements produce optic flow information, thus allowing the network to tune itself without requiring explicit knowledge of sensor geometry. The confounding effect of eye movement during translation is suppressed by learning the relationship between eye movement outflow commands and the optic flow signals that they induce. The remaining optic flow field is due only to observer translation and to independent motion of objects in the scene. A self-organizing feature map categorizes normalized translational flow patterns, thereby creating a map of cells that code heading directions. Heading information is then recombined with translational flow patterns in two different ways to form maps of scene depth and moving object locations. Most of the learning processes take place concurrently and evolve through unsupervised learning. Mapping the learned heading representations onto heading labels or motor commands requires additional structure. Simulations of the network verify its performance using both noise-free and noisy optic flow information.
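
The rotational-flow suppression step can be illustrated with a small numerical sketch. The snippet below is an assumption-laden illustration, not the paper's circuit: it learns a linear map `W` from an eye-movement outflow command to the retinal flow it induces, using a delta rule during rotation-only movements, and then subtracts the predicted rotational flow at run time. The dimensions, learning rate, and the random matrix `A_true` standing in for the unknown sensor geometry are all invented for the example.

```python
# Sketch: cancel eye-rotation-induced flow by learning the mapping from
# outflow commands to the flow they induce (all parameters are assumptions).
import numpy as np

rng = np.random.default_rng(0)
FLOW_DIM = 40     # assumed: 20 retinal sample points -> 40 flow components
CMD_DIM = 2       # assumed: horizontal/vertical eye-velocity command

A_true = rng.normal(size=(FLOW_DIM, CMD_DIM))   # unknown sensor geometry

W = np.zeros((FLOW_DIM, CMD_DIM))
for _ in range(5000):                           # rotation-only training phase
    cmd = rng.normal(size=CMD_DIM)              # self-generated eye movement
    flow = A_true @ cmd                         # flow that the movement induces
    err = flow - W @ cmd                        # prediction error
    W += 0.01 * np.outer(err, cmd)              # delta-rule weight update

# Run time: subtract the predicted rotational component of the flow field.
cmd = rng.normal(size=CMD_DIM)
translational = rng.normal(size=FLOW_DIM)       # stand-in for translation flow
measured = A_true @ cmd + translational
residual = measured - W @ cmd                   # approx. translational flow only
print(np.allclose(residual, translational, atol=1e-2))
```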
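The heading map itself can be sketched with a minimal one-dimensional self-organizing feature map trained on normalized translational flow patterns. Everything below (the pinhole-style flow model, the number of cells, the neighborhood width, and the training distribution of headings) is an illustrative assumption rather than the architecture of the paper; it only shows how categorizing flow patterns with a SOM yields cells that come to code heading directions.

```python
# Sketch: a 1-D self-organizing map over normalized translational flow patterns.
import numpy as np

def translational_flow(heading, points):
    # Pinhole-style translational flow for scene points (x, y, Z):
    # depends on the heading direction (tx, ty, tz) and inverse depth 1/Z.
    tx, ty, tz = heading
    x, y, Z = points[:, 0], points[:, 1], points[:, 2]
    u = (-tx + x * tz) / Z
    v = (-ty + y * tz) / Z
    return np.concatenate([u, v])

class HeadingSOM:
    """Minimal 1-D self-organizing feature map (illustrative parameters)."""
    def __init__(self, n_cells=20, dim=40, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_cells, dim))
        self.w /= np.linalg.norm(self.w, axis=1, keepdims=True)
        self.idx = np.arange(n_cells)

    def train(self, flow, lr=0.2, sigma=2.0):
        p = flow / (np.linalg.norm(flow) + 1e-9)        # normalize flow pattern
        winner = int(np.argmax(self.w @ p))             # best-matching heading cell
        h = np.exp(-0.5 * ((self.idx - winner) / sigma) ** 2)  # neighborhood
        self.w += lr * h[:, None] * (p - self.w)        # pull neighbors toward input
        self.w /= np.linalg.norm(self.w, axis=1, keepdims=True)
        return winner

# Usage: train on flow fields generated by varying heading azimuth.
som = HeadingSOM()
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, 20),
                       rng.uniform(-1, 1, 20),
                       rng.uniform(1.0, 5.0, 20)])      # fixed sample points, varied depth
for _ in range(3000):
    angle = rng.uniform(-0.5, 0.5)                      # heading azimuth (radians)
    heading = np.array([np.sin(angle), 0.0, np.cos(angle)])
    som.train(translational_flow(heading, pts))
```

After training, nearby cells on the map respond to similar heading directions, which is the sense in which the map "codes" heading in this toy setting.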