Localist attractor networks

Citation
R. S. Zemel and M. C. Mozer, Localist attractor networks, Neural Computation, 13(5), 2001, pp. 1045-1064
Citations number
32
Categorie Soggetti
Neurosciences & Behavior; AI, Robotics and Automatic Control
Journal title
NEURAL COMPUTATION
ISSN journal
0899-7667
Volume
13
Issue
5
Year of publication
2001
Pages
1045 - 1064
Database
ISI
SICI code
0899-7667(200105)13:5<1045:LAN>2.0.ZU;2-H
Abstract
Attractor networks, which map an input space to a discrete output space, are useful for pattern completion: cleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor networks have similar dynamics to their distributed counterparts, they are much easier to work with and interpret. We propose a statistical formulation of localist attractor net dynamics, which yields a convergence proof and a mathematical interpretation of model parameters. We present simulation experiments that explore the behavior of localist attractor networks, showing that they yield few spurious attractors, and they readily exhibit two desirable properties of psychological and neurobiological models: priming (faster convergence to an attractor if the attractor has been recently visited) and gang effects (in which the presence of an attractor enhances the attractor basins of neighboring attractors).