We describe a mechanism for biological learning and adaptation based on two simple principles: (i) neuronal activity propagates only through the network's strongest synaptic connections (extremal dynamics), and (ii) the strengths of active synapses are reduced if mistakes are made; otherwise no changes occur (negative feedback). The balance of these two tendencies typically shapes a synaptic landscape with configurations that are barely stable, and therefore highly flexible. This allows for swift adaptation to new situations. Recollection of past successes is achieved by punishing synapses that have once participated in activity associated with successful outputs much less than synapses that have never been successful. Despite its simplicity, the model can readily learn to solve complicated nonlinear tasks, even in the presence of noise. In particular, the learning time for the benchmark parity problem scales algebraically with the problem size N, with an exponent k ∼ 1.4.
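
To make the two principles concrete, here is a minimal sketch of one way they can be realized: a winner-take-all feedforward network in which activity follows the single strongest synapse out of each active neuron, and only mistakes trigger depression of the active path. The layer sizes, the depression by a random amount, and the toy task are illustrative assumptions, not the paper's exact model.

    import random

    def make_weights(n_pre, n_post):
        """Random initial synaptic strengths between two layers."""
        return [[random.random() for _ in range(n_post)] for _ in range(n_pre)]

    def forward(w1, w2, i):
        """Extremal dynamics: activity propagates only through the
        strongest synapse out of each active neuron."""
        j = max(range(len(w1[i])), key=lambda h: w1[i][h])  # winning hidden unit
        k = max(range(len(w2[j])), key=lambda o: w2[j][o])  # winning output unit
        return j, k

    def learn(task, n_hidden=20, max_steps=200_000, seed=0):
        """Negative feedback: depress the synapses on the active path after
        a mistake; after a success, change nothing. Returns the number of
        steps until the task is solved, or None. (Depression by a random
        amount is an assumption for this sketch.)"""
        random.seed(seed)
        n_in, n_out = len(task), 1 + max(task.values())
        w1, w2 = make_weights(n_in, n_hidden), make_weights(n_hidden, n_out)
        for step in range(max_steps):
            i = random.randrange(n_in)        # present a random input state
            j, k = forward(w1, w2, i)
            if k != task[i]:                  # mistake: weaken the active path
                w1[i][j] -= random.random()
                w2[j][k] -= random.random()
            if all(forward(w1, w2, x)[1] == task[x] for x in range(n_in)):
                return step + 1               # absorbing state: no more changes
        return None

    # Toy task: an arbitrary input -> output mapping.
    print(learn({0: 2, 1: 0, 2: 3, 3: 1}))

Because feedback acts only on mistakes, a fully correct configuration is an absorbing state, while the repeatedly depressed weights sit close to their competitors, giving the barely stable, flexible landscape described above.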
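The parity benchmark can be posed for this sketch by treating each of the 2^N input patterns as a distinct input state whose target is its parity bit. This encoding, and the growth of the hidden pool with the task, are our assumptions for illustration; the k ∼ 1.4 scaling is the paper's measured result, not something this toy run establishes.

    N = 4                                      # problem size
    parity = {p: bin(p).count("1") % 2 for p in range(2 ** N)}
    print(learn(parity, n_hidden=8 * 2 ** N))  # assumed hidden-pool sizing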