We present an image-based technique to accelerate navigation in complex static environments. We perform an image-space simplification of each sample of the scene taken at a particular viewpoint and dynamically combine these simplified samples to produce images for arbitrary viewpoints. Since the scene is converted into a bounded-complexity representation in image space, with the base images rendered beforehand, the rendering speed is largely insensitive to scene complexity. The proposed method correctly simulates the kinetic depth effect (parallax) and occlusion, and can resolve missing visibility information. This paper describes a suitable representation for the samples, a specific technique for simplifying them, and different morphing methods for combining the sample information to reconstruct the scene. We use hardware texture mapping to implement the image-space warping and hardware affine transformations to compute the view-dependent warping function. (C) 1998 Elsevier Science Ltd. All rights reserved.
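As an illustration of the core operation behind the parallax (kinetic depth) effect the abstract mentions, the sketch below forward-warps a depth image from a reference viewpoint to a novel viewpoint. This is a generic depth-image reprojection, not the paper's hardware texture-mapping implementation; the pinhole camera model, the intrinsic matrix `K`, and the tiny 2x2 test image are all assumptions for the example.

```python
import numpy as np

def forward_warp(depth, K, R, t):
    """Reproject each reference pixel (u, v, depth) into a novel view.

    depth : (H, W) array of positive depths in the reference camera.
    K     : (3, 3) intrinsic matrix, assumed shared by both cameras.
    R, t  : rotation and translation of the novel camera w.r.t. the reference.
    Returns an (H, W, 2) array of pixel coordinates in the novel view.
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    # Back-project pixels to 3-D points in the reference camera frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Transform into the novel camera frame and project back to pixels.
    proj = K @ (R @ pts + t.reshape(3, 1))
    return (proj[:2] / proj[2]).T.reshape(H, W, 2)

# Translate the camera sideways: near pixels shift more than far ones,
# which is exactly the parallax cue the method reproduces.
K = np.array([[100.0, 0.0, 1.0],
              [0.0, 100.0, 1.0],
              [0.0,   0.0, 1.0]])
depth = np.array([[1.0, 1.0],   # top row: near surface
                  [4.0, 4.0]])  # bottom row: far surface
uv = forward_warp(depth, K, np.eye(3), np.array([0.1, 0.0, 0.0]))
shift_near = uv[0, 0, 0] - 0.0  # horizontal shift of a near pixel
shift_far = uv[1, 0, 0] - 0.0   # horizontal shift of a far pixel
print(shift_near, shift_far)
```

Because the warp depends on per-pixel depth, a single translation of the viewpoint displaces near and far pixels by different amounts, producing correct parallax from one pre-rendered sample.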