Distributed coding at the hidden layer of a multi-layer perceptron (MLP) endows the network with memory compression and noise tolerance capabilities. However, an MLP typically requires slow off-line learning to avoid catastrophic forgetting in an open input environment. An adaptive resonance theory (ART) model is designed to guarantee stable memories even with fast on-line learning. However, ART stability typically requires winner-take-all coding, which may cause category proliferation in a noisy input environment. Distributed ARTMAP (dARTMAP) seeks to combine the computational advantages of MLP and ART systems in a real-time neural network for supervised learning. The implementation algorithm presented here describes one class of dARTMAP networks. This system incorporates elements of the unsupervised dART model, as well as new features, including a content-addressable memory (CAM) rule for improved contrast control at the coding field. A dARTMAP system reduces to fuzzy ARTMAP when coding is winner-take-all. Simulations show that dARTMAP retains fuzzy ARTMAP accuracy while significantly improving memory compression. (C) 1998 Elsevier Science Ltd. All rights reserved.
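The contrast between distributed coding and winner-take-all coding at the coding field can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the paper's actual CAM rule: here a simple power-law sharpening of normalized signals stands in for contrast control, with the winner-take-all regime recovered as the sharpening exponent grows large.

```python
import numpy as np

def contrast_code(signals, p):
    """Toy contrast-controlled distributed code (illustrative only,
    not the dARTMAP CAM rule): raise coding-field input signals to
    a power p and normalize. Small p gives a distributed code;
    large p approaches a one-hot, winner-take-all code."""
    s = np.asarray(signals, dtype=float) ** p
    return s / s.sum()

signals = [0.2, 0.5, 0.3]
distributed = contrast_code(signals, 1)    # activity spread over nodes
near_wta = contrast_code(signals, 50)      # nearly all activity at the max node
print(distributed)
print(near_wta)
```

In the large-`p` limit the code concentrates on the maximally activated node, which mirrors the sense in which a distributed system can reduce to a winner-take-all one such as fuzzy ARTMAP.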