Optimal linear compression under unreliable representation and robust PCA neural models

Citation
K.I. Diamantaras et al., Optimal linear compression under unreliable representation and robust PCA neural models, IEEE NEURAL, 10(5), 1999, pp. 1186-1195
Number of citations
20
Subject categories
AI Robotics and Automatic Control
Journal title
IEEE TRANSACTIONS ON NEURAL NETWORKS
ISSN journal
1045-9227
Volume
10
Issue
5
Year of publication
1999
Pages
1186 - 1195
Database
ISI
SICI code
1045-9227(199909)10:5<1186:OLCUUR>2.0.ZU;2-5
Abstract
In a typical linear data compression system the representation variables resulting from the coding operation are assumed totally reliable, and therefore the solution in the mean-squared-error (MSE) sense is an orthogonal projector onto the so-called principal component subspace. When the representation variables are contaminated by additive noise which is uncorrelated with the signal, the problem is called noisy principal component analysis (NPCA), and the optimal MSE solution is not a trivial extension of PCA. We first show that the problem is not well defined unless we impose explicit or implicit constraints on either the coding or the decoding operator. Second, orthogonality is not a property of the optimal solution under most constraints. Third, the signal components may or may not be reconstructed depending on the noise level. As the noise power increases, we observe rank reduction in the optimal solution under most reasonable constraints. In these cases it appears that it is preferable to omit the smaller signal components rather than attempting to reconstruct them. This phenomenon has similarities with classical information-theoretical results, notably the water-filling analogy, found in parallel additive Gaussian noise channels. Finally, we show that standard Hebbian-type PCA learning algorithms are not optimally robust to noise, and we propose a new Hebbian-type learning algorithm which is optimally robust in the NPCA sense.
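
The rank-reduction behavior described in the abstract can be illustrated with a small numerical experiment. The sketch below is not from the paper; it assumes a deliberately simplified setting: a diagonal signal covariance, an encoder built from the top-k principal directions, isotropic representation noise of variance sigma^2, and a decoder constrained to the encoder transpose. Under that particular constraint, retaining a component only pays off when its eigenvalue exceeds the noise variance, so the best rank shrinks as the noise power grows, loosely mirroring the water-filling intuition; the paper itself treats more general coding/decoding constraints.

```python
import numpy as np

# Illustrative NPCA sketch (simplified, not the paper's general setting):
# signal with diagonal covariance, encoder = top-k principal directions,
# noisy representation y = W x + n, decoder constrained to W^T.
rng = np.random.default_rng(0)
eigvals = np.array([4.0, 2.0, 1.0, 0.5, 0.25])   # signal component variances
d = len(eigvals)
n_samples = 100_000
x = rng.normal(size=(n_samples, d)) * np.sqrt(eigvals)

def npca_mse(k, sigma2):
    """Empirical MSE of reconstructing x from k noisy principal components."""
    W = np.eye(d)[:k]                             # top-k directions (identity basis here)
    noise = rng.normal(scale=np.sqrt(sigma2), size=(n_samples, k))
    x_hat = (x @ W.T + noise) @ W                 # decode with the transposed encoder
    return np.mean(np.sum((x - x_hat) ** 2, axis=1))

for sigma2 in [0.1, 1.0, 3.0]:
    mses = [npca_mse(k, sigma2) for k in range(d + 1)]
    best_k = int(np.argmin(mses))
    # With this tied decoder, MSE = sum of dropped eigenvalues + k * sigma2,
    # so the optimal rank counts eigenvalues exceeding the noise variance.
    print(f"noise variance {sigma2}: best rank {best_k}, MSEs {np.round(mses, 2)}")
```

Running the sketch shows the optimal rank dropping from all five components at low noise to a single component once the noise variance exceeds most of the signal eigenvalues, which is the rank-reduction phenomenon the abstract refers to.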