In this paper we present an unsupervised neural network which exhibits competition between units via inhibitory feedback. The operation is such as to minimize reconstruction error, both for individual patterns and over the entire training set. A key difference from networks which perform principal components analysis, or one of its variants, is the ability to converge to non-orthogonal weight values. We discuss the network's operation in relation to the twin goals of maximizing information transfer and minimizing code entropy, and show how the assignment of prior probabilities to network outputs can help to reduce entropy. We present results from two binary coding problems, and from experiments with image coding.
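To make the reconstruction-error criterion concrete, the following is a minimal sketch in assumed notation (the symbols $\mathbf{x}$, $a_j$, and $\mathbf{w}_j$ are our own; the paper's exact cost function may differ). For an input pattern $\mathbf{x}$, output activities $a_j$, and weight vectors $\mathbf{w}_j$, a per-pattern reconstruction cost of this kind would take the form

$$
E(\mathbf{a}, W) = \tfrac{1}{2} \Bigl\| \mathbf{x} - \sum_j a_j \mathbf{w}_j \Bigr\|^2 ,
$$

with the training-set objective obtained by summing $E$ over all patterns. Since the $\mathbf{w}_j$ are not constrained to be orthogonal, the minimizing code need not coincide with a PCA projection of the input.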