A self-organizing neural network performing learning vector quantization (LVQ) to compress image data is proposed. Through unsupervised learning, our LVQ neural model approximates optimal pattern clustering from training images via a memory-adaptation process and builds a compression codebook in its synaptic weight matrix. The neural codebook, trained on example pictures, can be used as a codec to compress and decompress other pictures quickly; experiments on special image types, such as fingerprints, verify this property. Our approach is compared with other recently developed neural VQ models (neural gas, growing cell structures, and conscience competitive learning), and their methodological premises are discussed. The performance of our model is also compared with JPEG and wavelet methods. Other advantages of our neural codec model are its low training time, high utilization of neurons, robust clustering capability, and simple computation. Furthermore, special training methods and learning-enhancement techniques give our model a filtering effect, yielding a compact neural codebook that achieves both high compression and high picture quality.
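The codebook-based codec described above can be sketched in a few lines: a vector-quantization codebook is trained on flattened image blocks with winner-take-all competitive learning, and compression then amounts to replacing each block by the index of its nearest codeword. This is a minimal illustrative sketch, not the paper's actual algorithm; the block dimension, codebook size, epoch count, and learning-rate schedule below are assumptions chosen for clarity.

```python
import numpy as np

def train_codebook(blocks, num_codes=16, epochs=20, lr0=0.5, seed=0):
    """Train a VQ codebook by unsupervised competitive learning.

    blocks: (N, D) array of flattened image blocks (e.g. D=16 for 4x4).
    Returns a (num_codes, D) codebook; only the winning codeword is
    updated per sample (winner-take-all), with a decaying learning rate.
    """
    rng = np.random.default_rng(seed)
    # Initialize codewords from randomly chosen training blocks.
    codebook = blocks[rng.choice(len(blocks), num_codes, replace=False)].astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)  # linear learning-rate decay
        for x in blocks[rng.permutation(len(blocks))]:
            w = np.argmin(((codebook - x) ** 2).sum(axis=1))  # winning codeword
            codebook[w] += lr * (x - codebook[w])              # move winner toward input
    return codebook

def encode(blocks, codebook):
    """Compress: map each block to the index of its nearest codeword."""
    dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

def decode(indices, codebook):
    """Decompress: look each index back up in the codebook."""
    return codebook[indices]
```

Because decoding is a single table lookup and encoding a nearest-neighbor search, both operations are fast once the codebook is trained, which is the property the codec relies on.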