Human spatial encoding of three-dimensional navigable space was studied using a virtual environment simulation. This allowed subjects to become familiar with a realistic scene by making simulated rotational and translational movements during training. Subsequent tests determined whether subjects could generalize their recognition ability by identifying novel-perspective views and topographic floor plans of the scene. Results from picture recognition tests showed that familiar-direction views were most easily recognized, although significant generalization to novel views was observed. Topographic floor plans were also easily identified. In further experiments, novel-view performance diminished when active training was replaced by passive viewing of static images of the scene. However, the ability to make self-initiated movements, as opposed to watching dynamic movie sequences, had no effect on performance. These results suggest that the representation of navigable space is view dependent and highlight the importance of spatio-temporal continuity during learning.