Analytical model for the effects of learning on spike count distributions

Citation
G. Settanni and A. Treves, Analytical model for the effects of learning on spike count distributions, NEURAL COMP, 12(8), 2000, pp. 1773-1787
Number of citations
11
Subject categories
Neurosciences & Behavior; AI, Robotics and Automatic Control
Journal title
NEURAL COMPUTATION
ISSN journal
0899-7667
Volume
12
Issue
8
Year of publication
2000
Pages
1773 - 1787
Database
ISI
SICI code
0899-7667(200008)12:8<1773:AMFTEO>2.0.ZU;2-V
Abstract
The spike count distribution observed when recording from a variety of neurons in many different conditions has a fairly stereotypical shape, with a single mode at zero or close to a low average count, and a long, quasi-exponential tail to high counts. Such a distribution has been suggested to be the direct result of three simple facts: the firing frequency of a typical cortical neuron is close to linear in the summed input current entering the soma, above a threshold; the input current varies on several timescales, both faster and slower than the window used to count spikes; and the input distribution at any timescale can be taken to be approximately normal. The third assumption is violated by associative learning, which generates correlations between the synaptic weight vector on the dendritic tree of a neuron, and the input activity vectors it is repeatedly subject to. We show analytically that for a simple feedforward model, the normal distribution of the slow components of the input current becomes the sum of two quasi-normal terms. The term important below threshold shifts with learning, while the term important above threshold does not shift but grows in width. These deviations from the standard distribution may be observable in appropriate recording experiments.
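
A minimal sketch (not taken from the paper; all parameter values, variable names, and the Poisson spike-generation step are illustrative assumptions) of how the three facts listed in the abstract combine to produce the stereotyped spike count distribution: a threshold-linear transfer function applied to a normally distributed input current with a slow component (fixed within each counting window) and a fast component (fluctuating within it), with spike counts drawn per window.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (assumed, not from the paper)
    n_windows = 20000    # number of spike-count windows
    T = 0.1              # counting window length (s)
    gain = 50.0          # firing rate per unit suprathreshold current (Hz)
    threshold = 0.0      # current threshold
    sigma_slow = 1.0     # std of slow input component (constant within a window)
    sigma_fast = 0.5     # std of fast input component (varies within a window)
    n_fast = 20          # fast fluctuations sampled per window

    # Slow component: one normal draw per counting window
    slow = rng.normal(0.0, sigma_slow, n_windows)

    counts = np.empty(n_windows, dtype=int)
    for i in range(n_windows):
        # Fast component: several normal draws within the window
        current = slow[i] + rng.normal(0.0, sigma_fast, n_fast)
        # Threshold-linear transfer: rate is linear in the current above threshold
        rate = gain * np.clip(current - threshold, 0.0, None)
        # Spikes per window drawn as Poisson around the window-averaged rate
        counts[i] = rng.poisson(rate.mean() * T)

    # Histogram: a mode at or near zero and a long, quasi-exponential tail
    hist = np.bincount(counts)
    for k, h in enumerate(hist[:15]):
        print(k, h)

The paper's analytical result concerns how associative learning deforms the distribution of the slow component away from the normal assumed above; the sketch only reproduces the baseline (pre-learning) case.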