A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low-frequency test inputs but can resemble long-term depression (LTD) for higher-frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network. © 1997 Elsevier Science Ltd.
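
The dynamic-weight rule lends itself to a compact illustration. Below is a minimal NumPy sketch, not the paper's exact equations: the names y (coding node activations), tau (adaptive thresholds), and the learning rate beta are illustrative assumptions, and the update rule is simply one monotone scheme consistent with the abstract's description of thresholds that only increase.

```python
import numpy as np

def dynamic_weight(y, tau):
    """Dynamic weight: the rectified difference between coding node
    activation y and adaptive threshold tau, i.e. [y - tau]^+."""
    return np.maximum(y - tau, 0.0)

def update_thresholds(y, tau, beta=1.0):
    """Thresholds rise monotonically ('atrophy due to disuse'): the
    increment beta * [y - tau]^+ is nonnegative, so tau never
    decreases. beta in (0, 1] spans fast (beta = 1) to slow learning,
    and learned change is apportioned by each node's activation.
    (Hypothetical update rule, for illustration only.)"""
    return tau + beta * dynamic_weight(y, tau)

# Toy usage: a distributed code over three coding nodes.
y = np.array([0.9, 0.4, 0.1])   # coding node activations
tau = np.zeros(3)               # thresholds start at zero
tau = update_thresholds(y, tau, beta=0.5)
print(dynamic_weight(y, tau))   # weights shrink where tau grew
```

In this sketch, a winner-take-all code (a single active node) would adapt only that node's threshold, leaving all other weights untouched, which is one way to picture the abstract's claim that dART reduces to fuzzy ART in the winner-take-all limit.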