Autoassociative Neural Networks (AANNs) are most commonly used for image data compression. The goal of an AANN for image data is for the network output to be 'similar' to the input. Most research in this area uses backpropagation training with Mean-Squared Error (MSE) as the optimisation criterion. This paper presents an alternative error function, the Visual Difference Predictor (VDP), based on concepts from the human visual system. Using the VDP as the error function provides a criterion for training an AANN more efficiently: it results in faster convergence of the weights while producing an output image that a human observer perceives as very similar to the input.
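To make the baseline concrete, the sketch below trains a tiny autoassociative network (a bottleneck autoencoder) with backpropagation on MSE, the standard setup the paper compares against. All layer sizes, the learning rate, and the toy data are illustrative assumptions, not values from the paper; the VDP error function itself is not reproduced here.

```python
import numpy as np

# Minimal sketch of an autoassociative network: a single tanh hidden layer
# narrower than the input, so the network must learn a compressed code and
# reconstruct the input from it.  Trained by backpropagation on MSE.
rng = np.random.default_rng(0)

n_in, n_hidden = 16, 4                     # bottleneck forces compression
W1 = rng.normal(0, 0.1, (n_in, n_hidden))  # encoder weights (assumed sizes)
W2 = rng.normal(0, 0.1, (n_hidden, n_in))  # decoder weights

def forward(x):
    h = np.tanh(x @ W1)    # hidden (compressed) representation
    y = h @ W2             # linear reconstruction of the input
    return h, y

def mse(y, x):
    return np.mean((y - x) ** 2)

# Toy "image patches": samples near a 4-dimensional subspace of R^16,
# so a 4-unit bottleneck can reconstruct them well.
basis = rng.normal(size=(4, n_in))
X = rng.normal(size=(200, 4)) @ basis

_, y0 = forward(X)
mse_init = mse(y0, X)

lr = 0.01
for _ in range(500):
    h, y = forward(X)
    err = (y - X) / X.shape[0]        # gradient of MSE w.r.t. output (up to a constant)
    gW2 = h.T @ err                   # backprop into decoder
    gh = err @ W2.T * (1 - h ** 2)    # backprop through tanh
    gW1 = X.T @ gh                    # backprop into encoder
    W2 -= lr * gW2
    W1 -= lr * gW1

_, y = forward(X)
mse_final = mse(y, X)
print(mse_init, mse_final)
```

Swapping MSE for a perceptual criterion such as the VDP amounts to replacing `mse` and its gradient `err` with the perceptual error measure and its derivative; the backpropagation machinery is otherwise unchanged.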