Although humans rely primarily on hearing to process speech, they can also extract a great deal of information with their eyes through lipreading. This skill becomes extremely important when the acoustic signal is degraded by noise. It would, therefore, be beneficial to find methods to reinforce acoustic speech with a synthesized visual signal in high-noise environments. This paper addresses the interaction between acoustic speech and visible speech. Algorithms for converting audible speech into visible speech will be examined, and applications which can utilize this conversion process will be presented. Our results demonstrate that it is possible to animate a natural-looking talking head using acoustic speech as input.