Vector quantization is a data compression method in which a set of data points is encoded by a reduced set of reference vectors, the codebook. We discuss a vector quantization strategy that jointly optimizes distortion errors and codebook complexity, thereby determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference vectors, their positions, and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the limit of asymptotic quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression, i.e., we quantize the wavelet coefficients of gray-level images and measure the reconstruction error. Our approach establishes a unifying framework for different quantization methods such as K-means clustering and its fuzzy version, entropy-constrained vector quantization, topological feature maps, and competitive neural networks.
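As a toy illustration of the codebook idea, the sketch below implements plain K-means vector quantization, one of the special cases the framework above unifies. It is not the maximum entropy method discussed here; the function name `vector_quantize` and all parameters are our own illustrative choices.

```python
import random

def vector_quantize(data, k=2, iters=20, seed=0):
    """Toy K-means vector quantizer: learn a codebook of k reference
    vectors and encode each data point by its nearest codeword index."""
    rnd = random.Random(seed)
    # Initialize the codebook with k distinct data points.
    codebook = [list(p) for p in rnd.sample(data, k)]
    labels = [0] * len(data)
    for _ in range(iters):
        # Assignment step: each point gets its nearest reference vector
        # under squared Euclidean distortion.
        for i, p in enumerate(data):
            labels[i] = min(
                range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, codebook[j])),
            )
        # Update step: move each reference vector to the centroid of
        # the points assigned to it.
        for j in range(k):
            pts = [data[i] for i in range(len(data)) if labels[i] == j]
            if pts:
                dim = len(pts[0])
                codebook[j] = [sum(p[d] for p in pts) / len(pts) for d in range(dim)]
    return codebook, labels

if __name__ == "__main__":
    # Two tight clusters: after training, points in the same cluster
    # share a codeword, i.e., the data set is compressed to 2 vectors.
    data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
            (3.0, 3.0), (3.1, 3.0), (3.0, 3.1)]
    codebook, labels = vector_quantize(data, k=2)
    print(codebook, labels)
```

Note that this fixes the codebook size k in advance; the strategy described above instead lets the complexity cost determine k.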