Y. Yacoob and L.S. Davis, Learned models for estimation of rigid and articulated human motion from stationary or moving camera, International Journal of Computer Vision, 36(1), 2000, pp. 5-30
We propose an approach for modeling, measurement and tracking of rigid and articulated motion as viewed from a stationary or moving camera. We first propose an approach for learning temporal-flow models from exemplar image sequences. The temporal-flow models are represented as a set of orthogonal temporal-flow bases that are learned using principal component analysis of instantaneous flow measurements. Spatial constraints on the temporal flow are then incorporated to model the movement of regions of rigid or articulated objects. These spatio-temporal flow models are subsequently used as the basis for simultaneous measurement and tracking of brightness motion in image sequences. We then address the problem of estimating composite independent object and camera image motions. We employ the spatio-temporal flow models learned by observing typical movements of the object from a stationary camera to decompose image motion into independent object and camera motions. The performance of the algorithms is demonstrated on several long image sequences of rigid and articulated bodies in motion.
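The abstract's core step, learning orthogonal temporal-flow bases by principal component analysis of instantaneous flow measurements, can be illustrated with a minimal sketch. This is not the authors' implementation: the array shapes, the number of retained bases k, and all function and variable names are illustrative assumptions, and synthetic data stands in for real flow measurements.

```python
# Sketch: learn orthogonal temporal-flow bases via PCA of instantaneous
# flow measurements, then express a new flow observation in that basis.
# Shapes, names, and k are assumptions for illustration only.
import numpy as np

def learn_flow_bases(flow_measurements, k):
    """flow_measurements: (T, D) array, each row an instantaneous flow
    measurement (e.g. stacked u/v components over a region) taken from
    exemplar image sequences; k: number of bases to retain."""
    mean_flow = flow_measurements.mean(axis=0)
    centered = flow_measurements - mean_flow
    # SVD of the centered data yields the principal components,
    # i.e. an orthonormal set of temporal-flow bases.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    bases = vt[:k]                      # (k, D) orthonormal rows
    return mean_flow, bases

def project_flow(observed_flow, mean_flow, bases):
    """Express a new instantaneous flow as coefficients over the learned
    bases; the low-dimensional reconstruction constrains tracking."""
    coeffs = bases @ (observed_flow - mean_flow)
    reconstruction = mean_flow + bases.T @ coeffs
    return coeffs, reconstruction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    exemplars = rng.normal(size=(200, 50))   # synthetic stand-in data
    mean_flow, bases = learn_flow_bases(exemplars, k=5)
    coeffs, recon = project_flow(exemplars[0], mean_flow, bases)
    print(coeffs.shape, recon.shape)         # (5,) (50,)
```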