Attractor networks, which map an input space to a discrete output space, are useful for pattern completion: cleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor networks have dynamics similar to those of their distributed counterparts, they are much easier to work with and interpret. We propose a statistical formulation of localist attractor net dynamics, which yields a convergence proof and a mathematical interpretation of the model parameters. We present simulation experiments that explore the behavior of localist attractor networks, showing that they yield few spurious attractors and that they readily exhibit two desirable properties of psychological and neurobiological models: priming (faster convergence to an attractor if the attractor has been recently visited) and gang effects (in which the presence of an attractor enhances the attractor basins of neighboring attractors).