We prove that maximization of mutual information between the output and the input of a feedforward neural network leads to full redundancy reduction under the following sufficient conditions: (i) the input signal is a (possibly nonlinear) invertible mixture of independent components; (ii) there is no input noise; (iii) the activity of each output neuron is a (possibly) stochastic variable with a probability distribution depending on the stimulus through a deterministic function of the inputs (where both the probability distributions and the functions can differ from neuron to neuron); (iv) optimization of the mutual information is performed over all these deterministic functions. This result extends that obtained by Nadal and Parga (1994), who considered the case of deterministic outputs.
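
A minimal sketch of the setting in symbols (the notation $\boldsymbol{\sigma}$, $f$, $S$, $h_i$, $V_i$, $N$, $R$ is illustrative, not taken from the paper). Conditions (i)-(iii) say that the stimulus is a noiseless, invertible mixture $S = f(\boldsymbol{\sigma})$ of independent sources $\boldsymbol{\sigma}$, and that each output obeys
\[
  P(V_i \mid S) = P_i\!\bigl(V_i \mid h_i(S)\bigr), \qquad i = 1, \dots, N,
\]
for some deterministic function $h_i$ of the inputs. Infomax, condition (iv), is the optimization
\[
  \max_{h_1, \dots, h_N} I(\mathbf{V}; S),
\]
and full redundancy reduction means that the redundancy
\[
  R = \sum_{i=1}^{N} H(V_i) - H(\mathbf{V})
\]
vanishes at the optimum, i.e. $P(\mathbf{V}) = \prod_{i} P(V_i)$: the outputs form a factorial code.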