K.I. Diamantaras et al., Optimal linear compression under unreliable representation and robust PCA neural models, IEEE Transactions on Neural Networks, 10(5), 1999, pp. 1186-1195
In a typical linear data compression system the representation variables resulting from the coding operation are assumed totally reliable, and therefore the solution in the mean-squared-error (MSE) sense is an orthogonal projector onto the so-called principal component subspace. When the representation variables are contaminated by additive noise that is uncorrelated with the signal, the problem is called noisy principal component analysis (NPCA), and the optimal MSE solution is not a trivial extension of PCA. We first show that the problem is not well defined unless we impose explicit or implicit constraints on either the coding or the decoding operator. Second, orthogonality is not a property of the optimal solution under most constraints. Third, the signal components may or may not be reconstructed depending on the noise level. As the noise power increases, we observe rank reduction in the optimal solution under most reasonable constraints. In these cases it appears preferable to omit the smaller signal components rather than attempt to reconstruct them. This phenomenon has similarities with classical information-theoretic results, notably the water-filling analogy found in parallel additive Gaussian noise channels. Finally, we show that standard Hebbian-type PCA learning algorithms are not optimally robust to noise, and we propose a new Hebbian-type learning algorithm which is optimally robust in the NPCA sense.
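As a minimal numerical sketch of the effect described above (not the authors' Hebbian algorithm), one can fix the encoder to the leading principal directions, add noise to the code, and compare the orthogonal PCA reconstruction with the MSE-optimal (Wiener) decoder. The 3-D covariance, the noise powers, and the choice of constraining the encoder this way are illustrative assumptions; the sketch only shows that the optimal decoder is not an orthogonal projection and that its gain on the weaker signal component shrinks toward zero as the noise grows, a soft version of the rank reduction discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Signal covariance with eigenvalues 5, 2, 0.5 (3-D signal, 2-D code);
# a random rotation makes the example less trivial.
eigvals = np.array([5.0, 2.0, 0.5])
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Rx = Q @ np.diag(eigvals) @ Q.T

# Encoder constrained to the two leading principal directions (rows of W).
w, V = np.linalg.eigh(Rx)
order = np.argsort(w)[::-1]
W = V[:, order[:2]].T                           # 2 x 3 coding operator


def mse(D, sigma2):
    """E||x - D(Wx + n)||^2 for zero-mean x ~ Rx and code noise n ~ sigma2*I."""
    A = np.eye(3) - D @ W
    return np.trace(A @ Rx @ A.T) + sigma2 * np.trace(D @ D.T)


for sigma2 in [0.0, 0.5, 2.0, 10.0]:            # noise power on the code y = Wx + n
    Ry = W @ Rx @ W.T + sigma2 * np.eye(2)      # code covariance
    D_wiener = Rx @ W.T @ np.linalg.inv(Ry)     # MSE-optimal decoder for this encoder
    D_pca = W.T                                 # naive orthogonal PCA reconstruction
    print(f"noise {sigma2:5.1f}:  MSE Wiener {mse(D_wiener, sigma2):6.3f}   "
          f"MSE PCA decoder {mse(D_pca, sigma2):6.3f}   "
          f"decoder gain on weaker component {np.linalg.norm(D_wiener[:, 1]):.2f}")
```

With this encoder the Wiener decoder reduces to per-component gains lambda_i / (lambda_i + sigma^2), so the weaker component's gain falls from 1 at zero noise toward 0 at high noise, while the fixed PCA decoder keeps reconstructing it at full strength and pays a growing MSE penalty.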