This paper describes an efficient method for making individual faces for animation from several possible inputs. We present a method to reconstruct a three-dimensional (3D) facial model for animation from two orthogonal pictures taken from front and side views, or from range data obtained from any available resource. It is based on extracting facial features semiautomatically and modifying a generic model with the detected feature points. Fine modifications follow if range data is available. Automatic texture mapping is employed, using an image composed from the two pictures. The reconstructed 3D face can be animated immediately with given expression parameters.
Several faces, obtained by applying one methodology to different input data to produce a final animatable face, are illustrated. (C) 2000 Elsevier Science B.V. All rights reserved.
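The core step of modifying a generic model with detected feature points can be illustrated, very roughly, by the sketch below. It is not the paper's actual deformation scheme; it simply shows the general idea of propagating feature-point displacements to the remaining vertices, here with hypothetical inverse-distance weighting.

```python
import math

def deform_generic_model(vertices, generic_feats, detected_feats, power=2.0):
    """Move a generic model's vertices so its feature points land on the
    detected feature positions (a simplified stand-in for the paper's
    generic-model modification step)."""
    # Per-feature displacement: detected position minus generic position.
    offsets = [tuple(d - g for d, g in zip(df, gf))
               for gf, df in zip(generic_feats, detected_feats)]
    result = []
    for v in vertices:
        dists = [math.dist(v, gf) for gf in generic_feats]
        if min(dists) < 1e-9:
            # Vertex coincides with a feature point: apply its offset exactly.
            off = offsets[dists.index(min(dists))]
        else:
            # Blend all feature offsets by inverse-distance weights.
            ws = [1.0 / d ** power for d in dists]
            total = sum(ws)
            off = tuple(sum(w * o[k] for w, o in zip(ws, offsets)) / total
                        for k in range(3))
        result.append(tuple(v[k] + off[k] for k in range(3)))
    return result
```

A vertex lying on a feature point is translated exactly onto the detected position, while intermediate vertices receive a smooth blend of the surrounding feature displacements.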