Images or videos may be captured under different illuminants than the models in an image or video proxy database. A change in illumination color, in particular, may confound recognition algorithms based on color histograms, or video segmentation routines built on them. Here we show that a very simple method of discounting illumination changes is adequate for both image retrieval and video segmentation tasks. We develop a feature vector of only 36 values that serves both of these objectives as well as retrieval of video proxy images from a database. The new image metric is based on a color-channel-normalization step, followed by a reduction of dimensionality by moving to a chromaticity space. Treating chromaticity histograms as images, we perform an effective low-pass filtering of the histogram by first reducing its resolution via a wavelet-based compression and then applying a DCT transformation followed by zonal coding. We show that the color constancy step (color-band normalization) can be carried out in the compressed domain for images that are stored in compressed form, and that only a small amount of image information needs to be decompressed in order to calculate the new metric. The new method performs better than previously tested methods for image or texture recognition and operates entirely in the compressed domain, on feature vectors. Apart from achieving illumination invariance for video segmentation, so that, e.g., an actor stepping out of a shadow does not trigger the declaration of a false cut, the metric reduces all videos to a uniform scale. Thresholds can therefore be developed on a training set of videos and applied to any new video, including streaming video, for segmentation as a one-pass operation. (C) 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
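As a rough illustration of the pipeline described above, the following is a minimal NumPy/SciPy sketch of building a 36-value feature vector: color-channel normalization, projection to chromaticity, histogram resolution reduction, and a DCT followed by zonal coding. The block-averaging stand-in for the wavelet step, the 128-bin histogram, and the zigzag keep-zone are assumptions made for illustration, not the authors' exact choices.

```python
# Illustrative sketch of a 36-value illumination-discounting feature vector;
# parameter choices and helper names are assumptions, not the paper's code.
import numpy as np
from scipy.fft import dctn


def color_channel_normalize(img):
    """Divide each color band by its mean so that a global change of
    illumination color (a per-channel scaling) is discounted."""
    img = img.astype(np.float64) + 1e-8
    return img / img.reshape(-1, 3).mean(axis=0)


def chromaticity_histogram(img, bins=128):
    """Reduce dimensionality to chromaticity (r, g) = (R, G) / (R + G + B)
    and accumulate a 2-D histogram, treated as a small image."""
    s = img.sum(axis=2) + 1e-8
    r, g = img[..., 0] / s, img[..., 1] / s
    hist, _, _ = np.histogram2d(r.ravel(), g.ravel(),
                                bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()


def reduce_resolution(hist, levels=3):
    """Stand-in for the wavelet-based reduction: keep only the low-pass
    content by block-averaging 2**levels x 2**levels cells (assumption)."""
    k = 2 ** levels
    n = hist.shape[0] // k
    return hist[:n * k, :n * k].reshape(n, k, n, k).mean(axis=(1, 3))


def zonal_dct_features(hist, n_keep=36):
    """DCT of the reduced histogram followed by zonal coding: keep the
    lowest-frequency coefficients in zigzag order (36 values)."""
    coeffs = dctn(hist, norm='ortho')
    order = sorted((i + j, i, j) for i in range(coeffs.shape[0])
                   for j in range(coeffs.shape[1]))
    return np.array([coeffs[i, j] for _, i, j in order[:n_keep]])


def feature_vector(img):
    """Full pipeline: normalize, go to chromaticity, reduce, DCT + zonal code."""
    return zonal_dct_features(
        reduce_resolution(chromaticity_histogram(color_channel_normalize(img))))
```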
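A corresponding sketch of one-pass cut detection on the resulting uniform scale, assuming an L2 distance between consecutive frames' feature vectors and a threshold learned from a training set; the distance measure and the detect_cuts helper are illustrative assumptions, not the paper's exact procedure.

```python
# One-pass shot-boundary sketch reusing feature_vector from the block above;
# the L2 distance and single fixed threshold are assumptions for illustration.
import numpy as np


def detect_cuts(frames, threshold):
    """Yield frame indices where the inter-frame feature distance exceeds
    the trained threshold, declaring a shot boundary (cut)."""
    prev = None
    for idx, frame in enumerate(frames):
        vec = feature_vector(frame)
        if prev is not None and np.linalg.norm(vec - prev) > threshold:
            yield idx
        prev = vec
```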