Learned models for estimation of rigid and articulated human motion from stationary or moving camera

Citation
Y. Yacoob and L.S. Davis, Learned models for estimation of rigid and articulated human motion from stationary or moving camera, INT J COM V, 36(1), 2000, pp. 5-30
Citations number
30
Subject Categories
AI Robotics and Automatic Control
Journal title
INTERNATIONAL JOURNAL OF COMPUTER VISION
ISSN journal
0920-5691
Volume
36
Issue
1
Year of publication
2000
Pages
5 - 30
Database
ISI
SICI code
0920-5691(200001)36:1<5:LMFEOR>2.0.ZU;2-8
Abstract
We propose an approach for modeling, measurement and tracking of rigid and articulated motion as viewed from a stationary or moving camera. We first propose an approach for learning temporal-flow models from exemplar image sequences. The temporal-flow models are represented as a set of orthogonal temporal-flow bases that are learned using principal component analysis of instantaneous flow measurements. Spatial constraints on the temporal-flow are then incorporated to model the movement of regions of rigid or articulated objects. These spatio-temporal flow models are subsequently used as the basis for simultaneous measurement and tracking of brightness motion in image sequences. Then we address the problem of estimating composite independent object and camera image motions. We employ the spatio-temporal flow models learned through observing typical movements of the object from a stationary camera to decompose image motion into independent object and camera motions. The performance of the algorithms is demonstrated on several long image sequences of rigid and articulated bodies in motion.
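The abstract's first step, learning a set of orthogonal temporal-flow bases by principal component analysis of instantaneous flow measurements, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the SVD-based PCA, and the toy data are assumptions introduced here.

```python
import numpy as np

def learn_flow_bases(flows, k):
    """Learn k orthogonal flow bases via PCA.

    flows: array of shape (n_samples, d), where each row is a
    flattened instantaneous flow measurement (u, v components).
    Returns (mean, bases) with bases of shape (k, d); rows of
    bases are orthonormal principal directions.
    """
    mean = flows.mean(axis=0)
    centered = flows - mean
    # SVD of the centered data; rows of vt are the principal directions
    # ordered by decreasing singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project_flow(flow, mean, bases):
    """Express a new flow measurement as coefficients over the bases."""
    return bases @ (flow - mean)

# Toy usage: 20 synthetic flow samples of dimension 8.
rng = np.random.default_rng(0)
flows = rng.standard_normal((20, 8))
mean, bases = learn_flow_bases(flows, k=3)
coeffs = project_flow(flows[0], mean, bases)
```

A new flow measurement is then represented by its coefficient vector over the learned bases, which is what allows the spatio-temporal flow models to constrain tracking.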