One of the goals of perception is to learn to respond to coherence across space, time and modality. Here we present an abstract framework for the local online unsupervised learning of this coherent information using multi-stream neural networks. The processing units distinguish between feedforward inputs projected from the environment and the lateral, contextual inputs projected from the processing units of other streams. The contextual inputs are used to guide learning towards coherent cross-stream structure. The goal of all the learning algorithms described is to maximize the predictability between each unit's output and its context. Many local cost functions may be applied, e.g. mutual information, relative entropy, squared error and covariance. Theoretical and simulation results indicate that, of these, the covariance rule (1) is the only rule that specifically links and learns only those streams carrying coherent information, (2) can be robustly approximated by a Hebbian rule, and (3) remains stable in the presence of input noise, in the absence of pairwise input correlations, and when discovering locally less informative components that are coherent globally. In accordance with the parallel nature of the biological substrate, we also show that all the rules scale up with the number of streams.
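The abstract does not give the update equations themselves; the following is a minimal sketch, under assumptions of our own, of the kind of covariance-driven, Hebbian-like cross-stream rule it alludes to. Two linear units, one per stream, receive inputs that share a single coherent component; each unit's weights are nudged by the product of its own mean-subtracted input and the mean-subtracted output of the other stream (its context). All names, dimensions, the learning rate and the weight normalisation are illustrative choices, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): two streams, each a single linear
# unit with a 5-dimensional feedforward input.  A shared "coherent" source
# drives input dimension 0 of both streams; the other dimensions are noise.
n_inputs = 5
eta = 0.01            # learning rate (assumed)
decay = 0.001         # running-average rate for mean estimates (assumed)
n_steps = 20000

w = [rng.normal(scale=0.1, size=n_inputs) for _ in range(2)]  # weights per stream
x_mean = [np.zeros(n_inputs) for _ in range(2)]               # running input means
y_mean = [0.0, 0.0]                                           # running output means

for _ in range(n_steps):
    s = rng.normal()                        # shared (coherent) source
    x = [rng.normal(size=n_inputs) for _ in range(2)]
    for k in range(2):
        x[k][0] += s                        # coherent component in dim 0 of both streams

    y = [w[k] @ x[k] for k in range(2)]     # feedforward responses

    for k in range(2):
        # Online estimates of input and output means, needed for the covariance step.
        x_mean[k] += decay * (x[k] - x_mean[k])
        y_mean[k] += decay * (y[k] - y_mean[k])

    for k in range(2):
        c = 1 - k                           # the other stream supplies the context
        # Covariance rule: a Hebbian-like step pairing the mean-subtracted
        # contextual output with the local mean-subtracted input, which
        # performs stochastic ascent on cov(y_k, y_c).
        w[k] += eta * (y[c] - y_mean[c]) * (x[k] - x_mean[k])
        w[k] /= np.linalg.norm(w[k])        # normalisation to keep the rule bounded (assumed)

# Both weight vectors should concentrate on the coherent dimension (index 0).
for k in range(2):
    print(f"stream {k} weights: {np.round(w[k], 2)}")
```

Run as-is, both weight vectors tend to concentrate on the shared input dimension, illustrating in toy form how contextual guidance can pick out coherent cross-stream structure while ignoring the independent noise dimensions.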
© 1998 Elsevier Science Ltd. All rights reserved.