NONLINEAR NEURONS IN THE LOW-NOISE LIMIT - A FACTORIAL CODE MAXIMIZES INFORMATION TRANSFER

Authors
J.-P. Nadal, N. Parga
Citation
J.-P. Nadal and N. Parga, NONLINEAR NEURONS IN THE LOW-NOISE LIMIT - A FACTORIAL CODE MAXIMIZES INFORMATION TRANSFER, Network, 5(4), 1994, pp. 565-581
Citations number
38
Subject Categories
Mathematical Methods, Biology & Medicine; Neurosciences; Engineering, Electrical & Electronic; Computer Science, Artificial Intelligence
Journal title
Network: Computation in Neural Systems
ISSN journal
0954-898X
Volume
5
Issue
4
Year of publication
1994
Pages
565 - 581
Database
ISI
SICI code
0954-898X(1994)5:4<565:NNITLL>2.0.ZU;2-H
Abstract
We investigate the consequences of maximizing information transfer in a simple neural network (one input layer, one output layer), focusing on the case of nonlinear transfer functions. We assume that both receptive fields (synaptic efficacies) and transfer functions can be adapted to the environment. The main result is that, for bounded and invertible transfer functions, in the case of a vanishing additive output noise and no input noise, maximization of information (Linsker's infomax principle) leads to a factorial code, and hence to the same solution as required by the redundancy-reduction principle of Barlow. We also show that this result remains valid for linear and, more generally, unbounded transfer functions, provided optimization is performed under an additive constraint, i.e. one that can be written as a sum of terms, each specific to one output neuron. Finally, we study the effect of a non-zero input noise. We find that, to first order in the input noise, assumed to be small compared with the (small) output noise, the above results still hold, provided the output noise is uncorrelated from one neuron to the next.
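To make the low-noise argument concrete, here is a minimal sketch of the entropy decomposition behind the result, in notation not taken from the paper itself: inputs $\mathbf{x}$, outputs $h_i = f_i(\mathbf{a}_i \cdot \mathbf{x}) + \nu_i$ with additive output noise $\nu_i$ of vanishing variance, and each transfer function $f_i$ bounded and invertible. The mutual information splits as

$$I(\mathbf{x};\mathbf{h}) = H(\mathbf{h}) - H(\mathbf{h}\mid\mathbf{x}),$$

and in the vanishing-noise limit $H(\mathbf{h}\mid\mathbf{x})$ is the entropy of the noise alone, independent of the receptive fields and transfer functions, so infomax reduces to maximizing the output entropy $H(\mathbf{h})$. That entropy is bounded by the sum of its marginals,

$$H(\mathbf{h}) \le \sum_i H(h_i), \qquad \text{with equality iff } P(\mathbf{h}) = \prod_i P(h_i),$$

i.e. equality holds exactly for a factorial code; each marginal term $H(h_i)$ is in turn maximal when $h_i$ is uniformly distributed over its bounded range, which fixes $f_i$ to be the cumulative distribution function of its net input. This is why infomax and Barlow's redundancy reduction select the same solution in this limit.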