We study the ability of a simple neural network (a perceptron architecture, no hidden units, binary outputs) to process information in the context of an unsupervised learning task. The network is asked to provide the best possible neural representation of a given input distribution, according to some criterion taken from information theory. We compare various optimization criteria that have been proposed: maximum information transmission, minimum redundancy and closeness to a factorial code. We show that for the perceptron one can compute the maximum information that the code (the output neural representation) can convey about the input. We show that one can use statistical mechanics techniques, such as the replica method, to compute the typical mutual information between input and output distributions. More precisely, for a Gaussian input source with a given correlation matrix, we compute the typical mutual information when the couplings are chosen randomly. We determine the correlations between the synaptic couplings that maximize the gain of information. We analyse the results in the case of a one-dimensional receptive field.
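
To make the setting concrete, here is a minimal numerical sketch, not the replica calculation of the paper: it draws inputs from a Gaussian source with an assumed exponentially decaying correlation matrix, passes them through a perceptron with randomly chosen couplings and binary (sign) outputs, and estimates the mutual information between input and output by Monte Carlo. Since the code is deterministic and noiseless, I(x; sigma) reduces to the output entropy H(sigma). The dimensions, sample size and the specific correlation form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: N input components, p binary output units.
N, p = 8, 3
n_samples = 200_000

# Gaussian input source with a given correlation matrix C
# (assumed here: translation-invariant, exponentially decaying correlations).
C = np.array([[np.exp(-abs(i - j) / 2.0) for j in range(N)] for i in range(N)])
x = rng.multivariate_normal(np.zeros(N), C, size=n_samples)

# Couplings J chosen randomly (p x N); outputs are deterministic signs.
J = rng.standard_normal((p, N)) / np.sqrt(N)
sigma = np.sign(x @ J.T)  # binary output code, shape (n_samples, p)

# For a deterministic, noiseless code, I(x; sigma) = H(sigma):
# estimate the output entropy from empirical code-word frequencies.
codes, counts = np.unique(sigma, axis=0, return_counts=True)
freqs = counts / n_samples
H = -np.sum(freqs * np.log2(freqs))
print(f"Estimated mutual information: {H:.3f} bits (at most {p} bits)")
```

Introducing correlations between the rows of J and re-running the estimate gives a numerical handle on the optimization the abstract refers to, namely choosing the coupling correlations that maximize the information gain.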