LEARNING INTERNAL REPRESENTATIONS IN AN ATTRACTOR NEURAL-NETWORK WITH ANALOG NEURONS

Authors
Dj. Amit, N. Brunel
Citation
Dj. Amit and N. Brunel, LEARNING INTERNAL REPRESENTATIONS IN AN ATTRACTOR NEURAL-NETWORK WITH ANALOG NEURONS, Network, 6(3), 1995, pp. 359-388
Citations number
28
Subject Categories
Mathematical Methods, Biology & Medicine; Neurosciences; Engineering, Electrical & Electronic; Computer Science, Artificial Intelligence
Journal title
Network
ISSN journal
0954-898X
Volume
6
Issue
3
Year of publication
1995
Pages
359 - 388
Database
ISI
SICI code
0954-898X(1995)6:3<359:LIRIAA>2.0.ZU;2-S
Abstract
A learning attractor neural network (LANN) with a double dynamics of neural activities and synaptic efficacies, operating on two different timescales, is studied by simulations in preparation for an electronic implementation. The present network includes several quasi-realistic features: neurons are represented by their afferent currents and output spike rates; excitatory and inhibitory neurons are separated; attractor spike rates as well as coding levels in arriving stimuli are low; learning takes place only between excitatory units. Synaptic dynamics is an unsupervised, analogue Hebbian process, but long-term memory in the absence of neural activity is maintained by a refresh mechanism which on long timescales discretizes the synaptic values, converting learning into an asynchronous stochastic process induced by the stimuli on the synaptic efficacies. This network is intended to learn a set of attractors from the statistics of freely arriving stimuli, which are represented by external synaptic inputs injected into the excitatory neurons. In the simulations, different types of sequences of many thousands of stimuli are presented to the network, without distinguishing in the dynamics a learning phase from retrieval. Stimulus sequences differ in pre-assigned global statistics (including time-dependent statistics); in the orders of presentation of individual stimuli within a given statistics; in the lengths of time intervals for each presentation and in the intervals separating one stimulus from another. We find that the network effectively learns a set of attractors representing the statistics of the stimuli, and is able to modify its attractors when the input statistics change. Moreover, as the global input statistics changes, the network can also forget attractors related to stimulus classes no longer presented. Forgetting takes place only due to the arrival of new stimuli. The performance of the network and the statistics of the attractors are studied as a function of the input statistics. Most of the large-scale characteristics of the learning dynamics can be captured theoretically. This model recasts a previous implementation of a LANN composed of discrete neurons into a network of more realistic neurons. The different elements have been designed to facilitate their implementation in silicon.
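To make the two-timescale picture in the abstract concrete, the following is a minimal, purely illustrative sketch of such a dynamics, not the authors' implementation: rate units driven by afferent currents, separate excitatory and inhibitory populations, analogue Hebbian plasticity restricted to excitatory-to-excitatory synapses, and a simplified "refresh" drift that slowly pushes each efficacy toward one of two discrete values. All sizes, time constants, transfer functions and parameter values are assumptions chosen for demonstration.

```python
# Toy sketch (assumed parameters) of a two-timescale rate network with analog
# Hebbian synapses and a discretizing refresh drift; not the published model.
import numpy as np

rng = np.random.default_rng(0)

N_E, N_I = 80, 20          # excitatory / inhibitory unit counts (assumed)
f = 0.1                    # low coding level of stimuli, as in the abstract
tau_neuron, tau_syn = 1.0, 50.0   # fast neural vs slow synaptic timescales
dt = 0.1

# Plastic excitatory-to-excitatory efficacies, analog values in [0, 1]
J_EE = rng.uniform(0.0, 1.0, size=(N_E, N_E))
np.fill_diagonal(J_EE, 0.0)

# Fixed couplings involving inhibitory units (no learning there)
J_EI = -0.5 * rng.uniform(size=(N_E, N_I))
J_IE = 0.5 * rng.uniform(size=(N_I, N_E))

def rate(current):
    """Output spike rate as a saturating function of afferent current (assumed form)."""
    return np.tanh(np.clip(current, 0.0, None))

def step(nu_E, nu_I, J_EE, stimulus):
    """One step of the fast neural dynamics plus the slow synaptic dynamics."""
    # Afferent currents: recurrent excitation, inhibition, external stimulus input
    I_E = J_EE @ nu_E / N_E + J_EI @ nu_I / N_I + stimulus
    I_I = J_IE @ nu_E / N_E
    nu_E += dt / tau_neuron * (-nu_E + rate(I_E))
    nu_I += dt / tau_neuron * (-nu_I + rate(I_I))

    # Unsupervised analog Hebbian term driven by pre- and post-synaptic rates
    hebb = np.outer(nu_E, nu_E)
    # Refresh mechanism (simplified stand-in): drift toward the nearest of two
    # discrete values, so efficacies persist in the absence of neural activity
    refresh = np.where(J_EE > 0.5, 1.0, 0.0) - J_EE
    J_EE += dt / tau_syn * (hebb + refresh)
    np.clip(J_EE, 0.0, 1.0, out=J_EE)
    np.fill_diagonal(J_EE, 0.0)
    return nu_E, nu_I, J_EE

# Present a random sequence of sparse stimuli as external inputs to the
# excitatory units; learning and retrieval are not separated into phases.
patterns = (rng.random((5, N_E)) < f).astype(float)
nu_E, nu_I = np.zeros(N_E), np.zeros(N_I)
for presentation in range(200):
    stim = patterns[rng.integers(len(patterns))]
    for _ in range(50):                 # stimulus-on interval
        nu_E, nu_I, J_EE = step(nu_E, nu_I, J_EE, stim)
    for _ in range(20):                 # inter-stimulus interval, no external input
        nu_E, nu_I, J_EE = step(nu_E, nu_I, J_EE, np.zeros(N_E))

print("mean E-to-E efficacy after training:", J_EE.mean())
```

In this sketch the frequency with which each pattern appears in the stimulus stream determines how strongly its synapses are reinforced, which is the sense in which the learned attractors reflect the input statistics; changing the pattern set mid-stream lets the refresh-plus-Hebbian drift overwrite old efficacies, mimicking stimulus-driven forgetting.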