LINEAR REDUNDANCY REDUCTION LEARNING

Citation
G. Deco and D. Obradovic, LINEAR REDUNDANCY REDUCTION LEARNING, Neural Networks, 8(5), 1995, pp. 751-755
Citations number
22
Subject Categories
Mathematical Methods, Biology & Medicine; Computer Sciences, Special Topics; Computer Science, Artificial Intelligence; Neurosciences; Physics, Applied
Journal title
NEURAL NETWORKS
ISSN journal
08936080
Volume
8
Issue
5
Year of publication
1995
Pages
751 - 755
Database
ISI
SICI code
0893-6080(1995)8:5<751:LRRL>2.0.ZU;2-2
Abstract
Feature extraction from any combination of sensory stimuli can be seen as the detection of statistically correlated combinations of inputs. A mathematical framework that describes this fact is formulated using concepts of information theory. The key idea is to define a bijective transformation that conserves the volume in order to assure the transmission of all the information from inputs to outputs without spurious generation of entropy. In addition, this transformation simultaneously constrains the distribution of the outputs so that the representation is factorial, i.e., the redundancy at the output layer is minimal. We formulate this novel unsupervised learning paradigm for a linear network. In the linear case, the method converges to the principal component transformation. Contrary to the ''infomax'' principle, we minimize the mutual information between the output neurons provided that the transformation conserves the entropy in the vertical sense (from inputs to outputs).
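The abstract's central claim can be illustrated numerically. This is a minimal sketch, not the authors' learning rule: for Gaussian inputs, a linear volume-conserving map (|det W| = 1) that decorrelates the outputs yields a factorial representation, and the orthogonal rotation given by the principal components is exactly such a map, consistent with the stated convergence to the principal component transformation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D Gaussian inputs (illustrative mixing matrix, chosen ad hoc).
A = np.array([[2.0, 0.0], [1.5, 0.5]])
x = rng.standard_normal((10_000, 2)) @ A.T

# Orthogonal rotation built from the eigenvectors of the input covariance:
# rows of W are the principal directions, so W is volume conserving.
cov_x = np.cov(x, rowvar=False)
_, eigvecs = np.linalg.eigh(cov_x)
W = eigvecs.T

y = x @ W.T  # network outputs under the linear transformation

# Volume conservation (|det W| = 1) and decorrelated outputs: for
# Gaussian variables, zero correlation means zero mutual information.
print(abs(np.linalg.det(W)))          # close to 1
print(np.cov(y, rowvar=False)[0, 1])  # close to 0
```

Because the rotation is orthogonal, no entropy is spuriously generated between input and output layers, which is the "vertical" entropy conservation the abstract contrasts with the infomax principle.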