We introduce a novel view-based object representation, called the saliency map graph (SMG), which captures the salient regions of an object view at multiple scales using a wavelet transform. This compact representation is highly invariant to translation, rotation (image and depth), and scaling, and offers the locality of representation required for occluded object recognition. To compare two saliency map graphs, we introduce two graph similarity algorithms. The first computes the topological similarity between two SMGs, providing a coarse-level matching of two graphs. The second computes the geometrical similarity between two SMGs, providing a fine-level matching of two graphs. We test and compare these two algorithms on a large database of model object views. (C) 1999 Elsevier Science B.V. All rights reserved.
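To make the representation concrete, the sketch below shows one possible way to model an SMG and a toy coarse-level comparison; the abstract does not specify the data structures or matching procedure, so the node fields (scale, position, children) and the greedy scoring function here are purely illustrative assumptions, not the authors' algorithms.

```python
# A minimal sketch (not the authors' implementation) of a saliency map graph:
# each node is a salient region detected at some wavelet scale, with edges from
# coarser-scale regions to the finer-scale regions they contain. The similarity
# function is only a toy stand-in for coarse, topology-level comparison: it
# greedily pairs nodes that share the same scale level and out-degree.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class SMGNode:
    scale: int                      # wavelet scale level (0 = coarsest); assumed field
    position: Tuple[float, float]   # region centroid in the image; assumed field
    children: List["SMGNode"] = field(default_factory=list)


def topological_similarity(a: List[SMGNode], b: List[SMGNode]) -> float:
    """Toy coarse-level score: fraction of nodes in `a` that can be paired with an
    unused node in `b` sharing the same scale level and number of children."""
    unused = list(b)
    matched = 0
    for node in a:
        for cand in unused:
            if cand.scale == node.scale and len(cand.children) == len(node.children):
                unused.remove(cand)
                matched += 1
                break
    return matched / max(len(a), len(b), 1)


# Example: two tiny graphs, each with one coarse region containing two finer regions.
g1_fine = [SMGNode(1, (10.0, 12.0)), SMGNode(1, (30.0, 8.0))]
g1 = [SMGNode(0, (20.0, 10.0), g1_fine)] + g1_fine
g2_fine = [SMGNode(1, (11.0, 13.0)), SMGNode(1, (29.0, 9.0))]
g2 = [SMGNode(0, (21.0, 11.0), g2_fine)] + g2_fine
print(topological_similarity(g1, g2))  # 1.0 for these structurally identical graphs
```

A fine-level, geometrical comparison in the spirit of the second algorithm would additionally penalize differences in node positions (e.g., distances between matched region centroids), rather than relying on graph structure alone; that refinement is omitted here for brevity.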