This paper presents a novel view-based approach to quantifying and reproducing facial expressions by systematically exploiting the degrees of freedom allowed by a realistic face model. The approach embeds efficient mesh morphing and texture animation to synthesize facial expressions. We suggest building eigenfeatures from synthetic images and designing an estimator that interprets the responses of the eigenfeatures to a facial expression in terms of animation parameters.