We describe a framework for aligning images without needing to establish explicit feature correspondences. We assume that the geometric relationship between the two images is adequately described by an affine transformation, and estimate its parameters from the statistical distribution of geometric properties of image contours.
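In the plane, such a transformation has the standard six-parameter form

\[
\mathbf{x}' = A\,\mathbf{x} + \mathbf{t}, \qquad A \in \mathbb{R}^{2\times 2},\ \mathbf{t} \in \mathbb{R}^{2},
\]

where $A$ accounts for rotation, scale, and shear, and $\mathbf{t}$ for translation.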
estimates obtained using the proposed method are robust to illumination con
ditions, sensor characteristics, etc., since image contours are relatively
invariant to these changes. Moreover, the distributional nature of our meth
od alleviates some of the common problems due to contour fragmentation, occ
lusion, clutter, etc. We provide empirical evidence of the accuracy and rob
ustness of our algorithm. Finally, we demonstrate our method on both real a
nd synthetic images, including multisensor image pairs.
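As a minimal illustration of the distributional idea (this is a sketch, not the estimator developed in the paper): assuming OpenCV and grayscale inputs, one can histogram edge orientations, which serve as a proxy for the distribution of contour tangent directions, and recover a relative rotation as the circular shift that best aligns the two histograms. All function and parameter names below are illustrative.

    import numpy as np
    import cv2

    def orientation_histogram(img, bins=180):
        """Histogram of gradient orientations along strong edges, a proxy
        for the distribution of contour tangent directions."""
        gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
        mag = np.hypot(gx, gy)
        ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # orientation mod 180 deg
        edge = mag > np.percentile(mag, 95)           # keep only strong edges
        hist, _ = np.histogram(ang[edge], bins=bins, range=(0.0, 180.0),
                               weights=mag[edge])
        return hist / (hist.sum() + 1e-12)

    def estimate_rotation(img_a, img_b, bins=180):
        """Recover the relative rotation (degrees, mod 180) as the circular
        shift that maximizes the correlation of the two histograms."""
        ha = orientation_histogram(img_a, bins)
        hb = orientation_histogram(img_b, bins)
        scores = [np.dot(ha, np.roll(hb, s)) for s in range(bins)]
        return np.argmax(scores) * (180.0 / bins)

For example, estimate_rotation(cv2.imread('a.png', cv2.IMREAD_GRAYSCALE), cv2.imread('b.png', cv2.IMREAD_GRAYSCALE)) returns the best-matching rotation angle in degrees. Because the comparison is between whole distributions rather than individual matched features, a missing or fragmented contour perturbs the histogram only slightly, which is the robustness property the abstract refers to.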