PROBABILITY DENSITY METHODS FOR SMOOTH FUNCTION APPROXIMATION AND LEARNING IN POPULATIONS OF TUNED SPIKING NEURONS

Authors
Citation
T.D. Sanger, PROBABILITY DENSITY METHODS FOR SMOOTH FUNCTION APPROXIMATION AND LEARNING IN POPULATIONS OF TUNED SPIKING NEURONS, Neural Computation, 10(6), 1998, pp. 1567-1586
Citations number
50
Categorie Soggetti
Computer Science Artificial Intelligence
Journal title
Neural Computation
ISSN journal
0899-7667
Volume
10
Issue
6
Year of publication
1998
Pages
1567 - 1586
Database
ISI
SICI code
0899-7667(1998)10:6<1567:PDMFSF>2.0.ZU;2-Y
Abstract
This article proposes a new method for interpreting computations performed by populations of spiking neurons. Neural firing is modeled as a rate-modulated random process for which the behavior of a neuron in response to external input can be completely described by its tuning function. I show that under certain conditions, cells with any desired tuning functions can be approximated using only spike coincidence detectors and linear operations on the spike output of existing cells. I show examples of adaptive algorithms based on only spike data that cause the underlying cell-tuning curves to converge according to standard supervised and unsupervised learning algorithms. Unsupervised learning based on principal components analysis leads to independent cell spike trains. These results suggest a duality relationship between the random discrete behavior of spiking cells and the deterministic smooth behavior of their tuning functions. Classical neural network approximation methods and learning algorithms based on continuous variables can thus be implemented within networks of spiking neurons without the need to make numerical estimates of the intermediate cell firing rates.
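The abstract's core model, a rate-modulated random spiking process fully characterized by a smooth tuning function, can be illustrated with a minimal simulation. The sketch below is an assumption-laden illustration, not the paper's own implementation: it assumes a Gaussian tuning function (hypothetical parameters `preferred`, `width`, `peak_rate`) and an inhomogeneous Poisson spike generator, then shows that the empirical spike rate recovers the smooth tuning curve from discrete spike data.

```python
import numpy as np

rng = np.random.default_rng(0)

def tuning(x, preferred=0.0, width=0.5, peak_rate=50.0):
    """Hypothetical Gaussian tuning function: firing rate (Hz) as a
    smooth, deterministic function of the stimulus value x."""
    return peak_rate * np.exp(-((x - preferred) ** 2) / (2 * width ** 2))

def empirical_rate(x, duration=2.0, dt=0.001):
    """Rate-modulated random spiking: in each small time bin, a spike
    occurs with probability rate * dt (an inhomogeneous Poisson process
    with constant stimulus). Returns the observed firing rate in Hz."""
    rate = tuning(x)
    n_bins = int(duration / dt)
    spikes = rng.random(n_bins) < rate * dt
    return spikes.sum() / duration

# Discrete spikes at the preferred stimulus vs. far from it:
# averaging spike counts recovers the underlying smooth tuning curve.
rate_at_peak = empirical_rate(0.0)   # expected near 50 Hz
rate_far = empirical_rate(2.0)       # expected near 0 Hz
```

The design choice here mirrors the duality the abstract describes: the spike train is random and discrete, but its statistics are governed entirely by the deterministic tuning function, so downstream computations can be phrased in terms of tuning curves without explicitly estimating firing rates.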