We describe a system for synthesizing facial expressions by editing captured performances. For this purpose, we use the actuation of expression muscles to control facial expressions. We note that numerous algorithms have already been developed for editing gross body motion. While the joint angles directly determine the configuration of the gross body, muscle actuation must act through a complicated mechanism to produce facial expressions. Therefore, we devote a significant part of this paper to establishing the relationship between muscle actuation and facial surface deformation. We
model the skin surface using the finite element method to simulate the deformation caused by the expression muscles. We then implement the inverse relationship, muscle actuation parameter estimation, to find the muscle actuation values from the trajectories of the markers on the performer's face. Once the forward and inverse relationships are established, retargeting or editing a performance becomes straightforward. We apply the original performance data to different facial models with equivalent muscle structures to produce similar expressions. We also produce novel expressions by deforming the original muscle-actuation curves to satisfy the key-frame constraints imposed by animators. Copyright (C) 2001 John Wiley & Sons, Ltd.
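The muscle actuation parameter estimation described above inverts the forward (muscle-to-surface) model. A minimal sketch of that inversion, under a strong simplifying assumption that is not from the paper, replacing the finite element forward model with a hypothetical linear basis `B` whose columns are per-muscle marker-displacement fields, reduces to a clamped least-squares fit:

```python
import numpy as np

# Hypothetical linearized forward model: column j of B is the marker-
# displacement field produced by unit actuation of muscle j. In the paper
# this mapping comes from a finite element simulation of the skin; here B
# is a random stand-in so the sketch is runnable.
rng = np.random.default_rng(0)
n_markers, n_muscles = 30, 8               # 30 markers (x,y,z flattened), 8 muscles
B = rng.normal(size=(3 * n_markers, n_muscles))

def estimate_actuation(displacements, basis):
    """Least-squares fit of muscle actuation values to observed marker
    displacements, clamped to [0, 1] since muscles only contract."""
    a, *_ = np.linalg.lstsq(basis, displacements, rcond=None)
    return np.clip(a, 0.0, 1.0)

# Synthesize one "captured" frame from known actuations, then recover them.
true_a = np.array([0.2, 0.0, 0.8, 0.5, 0.0, 0.1, 0.9, 0.3])
observed = B @ true_a
recovered = estimate_actuation(observed, B)
print(np.allclose(recovered, true_a, atol=1e-6))  # → True (noise-free case)
```

In practice the marker data are noisy and the forward model is nonlinear, so the paper's estimation would be iterative rather than a single linear solve; this sketch only illustrates the inverse-problem structure.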
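Deforming the actuation curves to satisfy animator key-frame constraints can be sketched as adding a constraint-satisfying offset to each curve. The piecewise-linear offset basis below is an assumption for illustration (the abstract does not specify how the curves are deformed); smoother bases could be substituted:

```python
import numpy as np

def deform_curve(times, values, key_times, key_values):
    """Deform a muscle-actuation curve so it passes through key-frames.
    An offset is interpolated between the key-frames (piecewise-linearly,
    as a simplification) and added to the original curve, so the curve
    hits each key-frame while keeping its shape elsewhere."""
    current = np.interp(key_times, times, values)          # curve value at each key
    offset = np.interp(times, key_times, key_values - current)
    return np.clip(values + offset, 0.0, 1.0)              # actuations stay in [0, 1]

# Example: an oscillating actuation curve, re-keyed at t = 0.25 and t = 0.75.
times = np.linspace(0.0, 1.0, 101)
values = 0.5 + 0.3 * np.sin(2 * np.pi * times)
edited = deform_curve(times, values, np.array([0.25, 0.75]), np.array([1.0, 0.2]))
```

After editing, `edited[25]` is 1.0 and `edited[75]` is 0.2, i.e. the curve now interpolates the animator's key-frames.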