PHOTOMOTION

Citation
R. Zhang et al., PHOTOMOTION, Computer Vision and Image Understanding, 63(2), 1996, pp. 221-231
Citations number
19
Subject Categories
Computer Sciences, Special Topics; Computer Science, Software, Graphics, Programming
ISSN journal
1077-3142
Volume
63
Issue
2
Year of publication
1996
Pages
221 - 231
Database
ISI
SICI code
1077-3142(1996)63:2<221:P>2.0.ZU;2-A
Abstract
Traditional shape from shading techniques, using a single image, do not reconstruct accurate surfaces and have difficulty with shadow areas. Traditional shape from photometric stereo techniques have the disadvantage that they need all of the input images together at once to minimize the total cost, and this process must be restarted if new images become available. To overcome the shortcomings of the above two techniques, we introduce a new technique called shape from photomotion. Shape from photomotion uses a series of 2-D Lambertian input images, generated by moving a light source around a scene, to recover the depth map. In each of the input images, the object in the scene remains at a fixed position and the only variable is the light source direction. The movement of the light source causes a change in the intensity of any given point in the image. The change in intensity is what enables us to recover the unknown parameter, the depth map, since it remains constant in each of the input images. This configuration is suitable for iterative refinement through the use of the extended Kalman filter. Our novel method for computing shape is a continuous form of the photometric stereo technique. It significantly differs from photometric stereo in the sense that the shape estimate will not only be computed for each light source orientation, but also gradually be refined by photomotion. Since the camera is fixed, the mapping between the depths at various light source locations is known; therefore, this method has an advantage over those which move the camera (egomotion) and keep the light source fixed. Results of this method are presented for sequences of synthetic and real images. (C) 1996 Academic Press, Inc.
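The sketch below is not the authors' implementation; it is a minimal Python illustration of the idea the abstract describes, namely folding one Lambertian intensity measurement at a time into an extended Kalman filter that refines a per-pixel surface-gradient estimate (p, q). The unit albedo, the particular light directions, the measurement-noise variance r, and the flat-surface prior are all illustrative assumptions, not values taken from the paper.

# Illustrative sketch only, assuming a single unit-albedo Lambertian point
# observed under a sequence of known light directions (the photomotion setting).
import numpy as np

def lambertian_intensity(p, q, light):
    """Intensity of a unit-albedo Lambertian point with surface gradient (p, q)."""
    lx, ly, lz = light
    s = np.sqrt(1.0 + p * p + q * q)
    return max(0.0, (lz - p * lx - q * ly) / s)

def measurement_jacobian(p, q, light):
    """Derivative of the Lambertian measurement with respect to the state (p, q)."""
    lx, ly, lz = light
    s = np.sqrt(1.0 + p * p + q * q)
    num = lz - p * lx - q * ly
    dh_dp = -lx / s - p * num / s**3
    dh_dq = -ly / s - q * num / s**3
    return np.array([[dh_dp, dh_dq]])

def ekf_update(x, P, z, light, r=1e-3):
    """One EKF measurement update: fold a new intensity z into the estimate (x, P)."""
    H = measurement_jacobian(x[0], x[1], light)
    pred = lambertian_intensity(x[0], x[1], light)
    S = H @ P @ H.T + r                    # innovation covariance (1x1)
    K = P @ H.T / S                        # Kalman gain (2x1)
    x = x + (K * (z - pred)).ravel()       # refine the gradient estimate
    P = (np.eye(2) - K @ H) @ P            # shrink the state covariance
    return x, P

# Example: recover the gradient of one point as images arrive one at a time,
# mimicking a light source being moved around a fixed object and camera.
true_p, true_q = 0.4, -0.2
lights = [np.array(v) / np.linalg.norm(v)
          for v in ([0.2, 0.1, 1.0], [-0.3, 0.2, 1.0], [0.1, -0.4, 1.0], [0.4, 0.4, 1.0])]
x, P = np.zeros(2), np.eye(2)              # flat-surface prior
for L in lights:
    z = lambertian_intensity(true_p, true_q, L)
    x, P = ekf_update(x, P, z, L)
print("estimated (p, q):", x)

Because the camera and object are fixed, each new light position simply adds another measurement of the same unknown state, which is why an incremental filter can keep refining the estimate without restarting from scratch when new images become available.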