Sounds are often the result of motions of virtual objects in a virtual environment. Therefore, sounds and the motions that caused them should be treated in an integrated way. When sounds and motions do not have the proper correspondence, the resulting confusion can lessen the effect of each. In this paper, we present an integrated system for modeling, synchronizing, and rendering sounds for virtual environments. The key idea of the system is the use of a functional representation of sounds, called timbre trees. This representation is used to model sounds that are parameterizable. These parameters can then be mapped to the parameters associated with the motions of objects in the environment. This mapping establishes the correspondence between motions and sounds in the environment. Representing arbitrary sounds using timbre trees is a difficult process that we do not address in this paper. We describe approaches for creating some timbre trees, including the use of genetic algorithms. Rendering the sounds in an aural environment is achieved by attaching special environmental nodes, which represent attenuation and delay as well as listener effects, to the timbre trees. These trees are then evaluated to generate the sounds. The system that we describe runs in parallel in real time on an eight-processor SGI Onyx. We see the main contribution of the present system as a conceptual framework in which to consider sound and motion in an integrated virtual environment.
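The functional-representation idea can be illustrated with a minimal sketch: timbre-tree nodes modeled as functions of time that are composed into a tree, with a motion parameter (here a hypothetical impact speed) mapped to a gain, and an "environmental" node supplying distance attenuation and propagation delay. All names and the specific node set below are illustrative assumptions, not the paper's actual implementation.

```python
import math

# Illustrative sketch only: a timbre-tree node is a function of time t
# (seconds) returning a sample value; internal nodes combine children.

def sine(freq):
    """Leaf node: sine oscillator at a fixed frequency (Hz)."""
    return lambda t: math.sin(2.0 * math.pi * freq * t)

def scale(gain, child):
    """Internal node: multiply a child's output by a gain parameter."""
    return lambda t: gain * child(t)

def mix(*children):
    """Internal node: sum the outputs of several children."""
    return lambda t: sum(c(t) for c in children)

def environment(distance, child, speed_of_sound=343.0):
    """Environmental node: 1/d attenuation plus propagation delay."""
    delay = distance / speed_of_sound
    atten = 1.0 / max(distance, 1.0)
    return lambda t: atten * child(t - delay) if t >= delay else 0.0

# Map a hypothetical motion parameter (impact speed) to the gain,
# then evaluate the tree by sampling it over time.
impact_speed = 2.5
tree = environment(distance=10.0,
                   child=scale(impact_speed, mix(sine(440.0), sine(660.0))))
samples = [tree(n / 8000.0) for n in range(8000)]  # 1 second at 8 kHz
```

Evaluating the tree is just calling the root function at successive sample times, which is what makes the representation easy to parameterize and parallelize across processors.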