A new view-based approach to the representation and recognition of human movement is presented. The basis of the representation is a temporal template: a static vector-image where the vector value at each point is a function of the motion properties at the corresponding spatial location in an image sequence. Using aerobics exercises as a test domain, we explore the representational power of a simple, two-component version of the templates: the first value is a binary value indicating the presence of motion, and the second value is a function of the recency of motion in the sequence. We then develop a recognition method that matches temporal templates against stored instances of views of known actions. The method automatically performs temporal segmentation, is invariant to linear changes in speed, and runs in real time on standard platforms.
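The two-component template described above can be sketched in code. The following is an illustrative reconstruction only, not the paper's implementation: the frame-differencing threshold, the temporal window `tau`, and all names are assumptions. The first component accumulates a binary record of where motion has occurred; the second records how recently it occurred.

```python
import numpy as np

def update_templates(prev, curr, motion_img, history_img, t, tau=30, thresh=15):
    """Update a two-component temporal template from consecutive grayscale frames.

    motion_img  : binary image marking where any motion has occurred.
    history_img : recency image; pixels with newer motion hold larger values.
    t, tau      : current timestep and temporal window (illustrative values).
    """
    # Detect moving pixels by simple frame differencing (an assumed scheme).
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    moving = diff > thresh

    # Component 1: binary presence of motion, accumulated over the sequence.
    motion_img = motion_img | moving

    # Component 2: stamp moving pixels with the current time, then
    # forget motion older than the temporal window tau.
    history_img = np.where(moving, float(t), history_img)
    history_img = np.where(history_img < t - tau, 0.0, history_img)
    return motion_img, history_img
```

A matcher could then compare such templates against stored views of known actions; the decay window `tau` is what makes the recency component sensitive to when, not just where, motion happened.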