Learning overcomplete representations

Citation
M.S. Lewicki and T.J. Sejnowski, Learning overcomplete representations, NEURAL COMP, 12(2), 2000, pp. 337-365
Number of citations
33
Subject categories
Neurosciences & Behavior; AI, Robotics and Automatic Control
Journal title
NEURAL COMPUTATION
ISSN journal
0899-7667
Volume
12
Issue
2
Year of publication
2000
Pages
337 - 365
Database
ISI
SICI code
0899-7667(200002)12:2<337:LOR>2.0.ZU;2-2
Abstract
In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete basis (or dictionary). We present an algorithm for learning an overcomplete basis by viewing it as a probabilistic model of the observed data. We show that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. This can be viewed as a generalization of the technique of independent component analysis and provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.
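
Illustrative sketch
The abstract describes learning an overcomplete basis by treating it as a probabilistic model of the data: sparse coefficients are inferred for each input, and the basis is adjusted to raise the data likelihood. The Python sketch below is a minimal illustration of that scheme, assuming a Laplacian prior on the coefficients and ISTA for the MAP inference step; all function names, hyperparameters, and the toy mixing setup are assumptions for demonstration and do not reproduce the authors' algorithm.

# Illustrative sketch, not the authors' code: a minimal overcomplete basis
# learner in the spirit of the abstract, alternating MAP inference of sparse
# coefficients (Laplacian prior, solved here with ISTA) with gradient updates
# of the basis. Names, step sizes, and the toy data are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def infer_coefficients(A, x, lam=0.1, steps=100):
    # MAP estimate of s for x ~ A s + Gaussian noise under a Laplacian prior
    # on s, i.e. minimize 0.5*||x - A s||^2 + lam*||s||_1 via ISTA.
    s = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(steps):
        z = s - A.T @ (A @ s - x) / L      # gradient step on the quadratic term
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return s

def learn_basis(X, n_basis, lam=0.1, lr=0.05, epochs=20):
    # Alternate coefficient inference and basis updates; columns are kept
    # unit-norm so the basis cannot dodge the sparsity penalty by growing.
    n_dims, n_samples = X.shape
    A = rng.standard_normal((n_dims, n_basis))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(epochs):
        for i in range(n_samples):
            x = X[:, i]
            s = infer_coefficients(A, x, lam)
            A += lr * np.outer(x - A @ s, s)   # reduce reconstruction error
            A /= np.linalg.norm(A, axis=0)
    return A

# Toy demo: 2-D observations mixed from 3 sparse (Laplacian) sources, i.e.
# more sources than mixtures, which a square (complete) basis cannot capture.
S = rng.laplace(size=(3, 300))
M = rng.standard_normal((2, 3))            # hypothetical mixing matrix
A = learn_basis(M @ S, n_basis=3)
print("learned overcomplete basis (columns):")
print(A)

With three basis vectors for two-dimensional data, the learned basis is overcomplete in exactly the sense the abstract describes; under these assumptions the columns tend to align with the mixing directions, which is the blind-source-separation reading of the result.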