The human interface for computer graphics systems is evolving toward a multimodal approach, moving from keyboard operation to more natural modes of interaction using visual, audio and gestural means. This paper discusses real-time interaction using visual input from a human face. It describes the underlying approach to recognizing and analysing the facial movements of a real performance. The output, in the form of parameters describing the facial expressions, can then be used to drive one or more applications running on the same or on a remote computer, enabling the user to control the graphics system by means of facial expressions. This is used primarily as part of a real-time facial animation system, in which a synthetic actor reproduces the animator's expressions. It also offers interesting possibilities for teleconferencing, as the network bandwidth requirements are low (about 7 kbit/s). Experiments have also been conducted using facial movements to control a walkthrough or to perform simple object manipulation.