C.W. Lee and B.A. Olshausen, "A nonlinear Hebbian network that learns to detect disparity in random-dot stereograms," Neural Computation, 8(3), 1996, pp. 545-566.
An intrinsic limitation of linear Hebbian networks is that they are capable of learning only from the linear pairwise correlations within an input stream. To explore what higher forms of structure could be learned with a nonlinear Hebbian network, we constructed a model network containing a simple form of nonlinearity and applied it to the problem of learning to detect the disparities present in random-dot stereograms. The network consists of three layers, with nonlinear sigmoidal activation functions in the second-layer units. The nonlinearities allow the second layer to transform the pixel-based representation in the input layer into a new representation based on coupled pairs of left-right inputs. The third layer of the network then clusters patterns occurring on the second-layer outputs according to their disparity via a standard competitive learning rule. Analysis of the network dynamics shows that the second-layer units' nonlinearities interact with the Hebbian learning rule to expand the region over which pairs of left-right inputs are stable. The learning rule is neurobiologically inspired and plausible, and the model may shed light on how the nervous system learns to use coincidence detection in general.
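The architecture described above (a sigmoidal second layer trained by a normalized Hebbian rule, feeding a third layer trained by standard competitive learning on random-dot stereo pairs) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the layer sizes, learning rates, bias term, and the one-dimensional "retina" with circular shift as disparity are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical layer sizes (the paper's exact dimensions are not given here)
n_pixels = 8        # each eye's 1-D "retina" has n_pixels binary dots
n_hidden = 8        # second-layer sigmoidal units
n_cluster = 3       # third-layer competitive units, one per disparity class

W1 = rng.normal(0.0, 0.1, (n_hidden, 2 * n_pixels))  # Hebbian weights
W2 = rng.normal(0.0, 0.1, (n_cluster, n_hidden))     # competitive weights

def make_stereo_pair(disparity):
    """Random-dot stereogram line: the right image is the left image shifted."""
    left = (rng.random(n_pixels) < 0.5).astype(float)
    right = np.roll(left, disparity)
    return np.concatenate([left, right])

eta1, eta2 = 0.05, 0.1
for _ in range(2000):
    d = rng.integers(0, n_cluster)        # draw a disparity class
    x = make_stereo_pair(d)

    # Second layer: sigmoidal units, plain Hebbian update with weight
    # normalization to keep the weights bounded (bias of -1.0 is an assumption)
    y = sigmoid(W1 @ x - 1.0)
    W1 += eta1 * np.outer(y, x)
    W1 /= np.linalg.norm(W1, axis=1, keepdims=True)

    # Third layer: standard competitive learning on the hidden-layer pattern;
    # the winning unit moves its weight vector toward the current pattern
    winner = np.argmax(W2 @ y)
    W2[winner] += eta2 * (y - W2[winner])
```

The intent is only to show the division of labor the abstract describes: the nonlinear second layer re-encodes coupled left-right pixel pairs, and the competitive third layer clusters those codes by disparity. Whether this toy version separates disparities cleanly depends on details (normalization scheme, bias, input statistics) that the actual model specifies and this sketch does not.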