J.F. Cohn et al., "Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding," Psychophysiology, 36(1), 1999, pp. 35-43
The face is a rich source of information about human behavior. Available methods for coding facial displays, however, are human-observer dependent, labor intensive, and difficult to standardize. To enable rigorous and efficient quantitative measurement of facial displays, we have developed an automated method of facial display analysis. In this report, we compare the results of this automated system with those of manual FACS (Facial Action Coding System; Ekman & Friesen, 1978a) coding. One hundred university students were videotaped while performing a series of facial displays. The image sequences were coded from videotape by certified FACS coders. Fifteen action units and action unit combinations that occurred a minimum of 25 times were selected for automated analysis. Facial features were automatically tracked in digitized image sequences using a hierarchical algorithm for estimating optical flow. The measurements were normalized for variation in position, orientation, and scale. The image sequences were randomly divided into a training set and a cross-validation set, and discriminant function analyses were conducted on the feature point measurements. In the training set, average agreement with manual FACS coding was 92% or higher for action units in the brow, eye, and mouth regions. In the cross-validation set, average agreement was 91%, 88%, and 81% for action units in the brow, eye, and mouth regions, respectively. Automated face analysis by feature point tracking demonstrated high concurrent validity with manual FACS coding.
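The evaluation pipeline the abstract describes (normalized feature-point measurements, a random split into training and cross-validation sets, discriminant function analysis, and percent agreement with manual coding) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the feature data are synthetic placeholders, and scikit-learn's `LinearDiscriminantAnalysis` is an assumed stand-in for the discriminant function analyses reported in the paper.

```python
# Sketch of the abstract's evaluation pipeline with synthetic data.
# Assumptions: 3 hypothetical action-unit classes, 10 tracked
# feature-point measurements per sequence; real inputs would be
# optical-flow displacements of facial feature points.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "feature point measurements": 60 sequences per class,
# class means offset so the classes are separable.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(60, 10)) for c in range(3)])
y = np.repeat(np.arange(3), 60)

# Standardize each measurement: a crude stand-in for the paper's
# normalization for variation in position, orientation, and scale.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Randomly divide sequences into training and cross-validation sets.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)

# Discriminant function analysis on the training set, then percent
# agreement of predicted vs. (here, simulated) manual codes.
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
agreement = (lda.predict(X_va) == y_va).mean()
print(f"cross-validation agreement: {agreement:.0%}")
```

With well-separated synthetic classes the agreement is high; in the study, the analogous cross-validation figures were 91%, 88%, and 81% for the brow, eye, and mouth regions.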