Previous model-based codecs have used two distinct approaches for mouth animation, namely texture codebooks or semantic animation parameters. Each approach, when used on its own, suffers drawbacks in terms of the bit-rate/quality trade-off. An investigation is presented into the performance of a new hybrid scheme for mouth animation in which codebooks and semantics are combined, with switching between them according to the amount of motion exhibited by the lips. Experiments show that both subjective quality and PSNR measures of the reconstructed images are improved.
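The abstract does not specify the switching rule in detail, so the following is only a minimal sketch of the idea: under low lip motion, cheap semantic animation parameters suffice, while rapid motion is better covered by a texture-codebook entry. The motion metric (mean feature-point displacement), the threshold value, and the helpers CODEBOOK, extract_semantic_params, and nearest_codebook_index are all hypothetical stand-ins invented for this illustration, not components described in the paper.

```python
import numpy as np

# Hypothetical mouth-texture codebook: 64 entries of 16x16 patches.
CODEBOOK = np.random.rand(64, 16, 16)

def mean_lip_displacement(prev_pts, curr_pts):
    # Assumed motion metric: mean Euclidean displacement of tracked
    # lip feature points between consecutive frames.
    return float(np.mean(np.linalg.norm(curr_pts - prev_pts, axis=1)))

def extract_semantic_params(lip_pts):
    # Stand-in semantic parameters: mouth width and opening height.
    return np.ptp(lip_pts[:, 0]), np.ptp(lip_pts[:, 1])

def nearest_codebook_index(mouth_patch):
    # Stand-in codebook search: entry with lowest MSE to the patch.
    errs = ((CODEBOOK - mouth_patch) ** 2).mean(axis=(1, 2))
    return int(errs.argmin())

def encode_mouth(prev_pts, curr_pts, mouth_patch, threshold=2.0):
    # Hybrid switching rule sketch; the threshold is illustrative only.
    if mean_lip_displacement(prev_pts, curr_pts) < threshold:
        # Low motion: transmit compact semantic parameters.
        return ("semantic", extract_semantic_params(curr_pts))
    # High motion: transmit a texture-codebook index instead.
    return ("codebook", nearest_codebook_index(mouth_patch))

# Example usage with synthetic data.
prev = np.random.rand(20, 2) * 100
curr = prev + np.random.randn(20, 2)
mode, payload = encode_mouth(prev, curr, np.random.rand(16, 16))
print(mode, payload)
```

Framed this way, the decoder receives either a small set of animation parameters to deform the face model or a codebook index selecting a stored mouth texture, with the encoder choosing per frame based on lip motion.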