The aim of this article is to investigate a mechanical description of
learning. A framework for local and simple learning algorithms, based
on interpreting a neural network as a set of configuration constraints,
is proposed. For any architectural design and learning task,
unsupervised and supervised algorithms can be derived, optionally using
unconstrained and hidden neurons. Unlike algorithms based on the
gradient in weight space, the proposed tangential correlation (TC)
algorithms move along the gradient in state space. This results in
optimal scaling properties and simple expressions for the weight
updates.
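To illustrate the distinction (the symbols below are illustrative
assumptions, not notation taken from the article), weight-space
gradient descent updates each weight directly, whereas a state-space
scheme first prescribes target changes for the neuron states and only
then converts them into weight changes that realize those targets:

\[
\Delta w_{ij} = -\eta\,\frac{\partial E}{\partial w_{ij}}
\quad\text{(weight space)},
\qquad
\Delta x_i = -\eta\,\frac{\partial E}{\partial x_i}
\quad\text{(state space)},
\]

where $E$ denotes a cost function, $w_{ij}$ the synaptic weights,
$x_i$ the neuron states, and $\eta$ a learning rate.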
The number of synapses is much larger than the number of neurons, so a
constraint on the neural states does not impose a unique constraint on
the synaptic weights. Which weights receive credit can be selected from
a parametrization of all weight changes that satisfy the state
constraints equivalently. At the heart of this parametrization are
minimal weight changes.
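As a minimal sketch of the minimum-norm idea (a toy linear,
single-layer setting; the variable names and the use of NumPy are
assumptions, not the article's algorithm), the requirement that a
weight change dW produce a prescribed state change dx for a fixed
presynaptic activity vector s is underdetermined; its minimum-norm
solution is an outer product, and adding any component that
annihilates s yields the other, equivalent solutions:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 8, 3                 # presynaptic and postsynaptic neurons
s = rng.standard_normal(n_pre)       # fixed presynaptic activities
dx = rng.standard_normal(n_post)     # prescribed change of postsynaptic states

# Minimal (Frobenius-norm) weight change satisfying dW @ s == dx.
dW_min = np.outer(dx, s) / (s @ s)

# Any matrix M projected onto the null space of s gives an equivalent solution.
M = rng.standard_normal((n_post, n_pre))
P_null = np.eye(n_pre) - np.outer(s, s) / (s @ s)   # projector annihilating s
dW_alt = dW_min + M @ P_null

assert np.allclose(dW_min @ s, dx)   # constraint met by the minimal update
assert np.allclose(dW_alt @ s, dx)   # ...and by every member of the family
```

Note that the minimal update is the outer product of a postsynaptic
term and the presynaptic activity, i.e. a correlation-like, local
expression.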
Two supervised algorithms (differing by their parametrizations)
operating on a three-layer perceptron are compared with standard
backpropagation. The successful training of fixed points of recurrent
networks is demonstrated. The unsupervised learning of oscillations
with variable frequencies is performed on standard and more
sophisticated recurrent networks. The results presented here can be
useful both for the analysis and for the synthesis of learning
algorithms.