Supervised neural-network learning algorithms have proven very successful at solving a variety of learning problems. However, they share a common limitation: they require explicit output labels. This requirement makes such algorithms implausible as biological models. In this paper, it is shown that pattern classification can be achieved in a multilayered feedforward neural network, without explicit output labels, by a process of supervised self-coding. The class projection is achieved by optimizing appropriate within-class uniformity and between-class discernibility criteria. The mapping function and the class labels are developed together, iteratively, using the derived self-coding backpropagation algorithm. The ability of the self-coding network to generalize to unseen data is also evaluated experimentally on real data sets, and it compares favorably with traditional labeled supervision of neural networks. Moreover, interesting features emerge from the proposed self-coding supervision that are absent in conventional approaches. The further implications of supervised self-coding with neural networks are also discussed.
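To make the two criteria concrete, the following is a minimal sketch of how within-class uniformity and between-class discernibility might be measured on a network's output vectors. The function name, the use of class-mean distances, and the toy data are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def self_coding_criteria(outputs, labels):
    """Hypothetical within-class uniformity and between-class discernibility
    scores (a sketch; the paper's actual criteria may differ).
    outputs: (n_samples, n_dims) network output vectors
    labels:  (n_samples,) current class assignments
    """
    classes = np.unique(labels)
    means = np.array([outputs[labels == c].mean(axis=0) for c in classes])
    # Within-class uniformity: mean distance of samples to their class mean
    # (smaller means outputs of one class cluster tightly together).
    within = np.mean([
        np.linalg.norm(outputs[labels == c] - means[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    # Between-class discernibility: mean pairwise distance between class
    # means (larger means classes project to well-separated codes).
    pairs = [(i, j) for i in range(len(classes))
             for j in range(i + 1, len(classes))]
    between = np.mean([np.linalg.norm(means[i] - means[j])
                       for i, j in pairs])
    return within, between

# Toy example: two well-separated clusters of 2-D output vectors.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.1, size=(20, 2))
b = rng.normal(1.0, 0.1, size=(20, 2))
outputs = np.vstack([a, b])
labels = np.array([0] * 20 + [1] * 20)
within, between = self_coding_criteria(outputs, labels)
```

In the self-coding scheme described above, criteria of this kind would be optimized iteratively: the network weights (the mapping function) and the output codes (the class labels) are both updated so that within-class uniformity improves while between-class discernibility is preserved.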