Sh. Zak et al., LEARNING AND FORGETTING IN GENERALIZED BRAIN-STATE-IN-A-BOX (BSB) NEURAL ASSOCIATIVE MEMORIES, Neural Networks, 9(5), 1996, pp. 845-854
We propose learning and forgetting techniques for generalized brain-state-in-a-box (BSB)-based associative memories. The generalized BSB model allows each neuron to have its own bias and does not require the synaptic weight matrix to be symmetric. A pattern is learned by a memory if a noisy or incomplete version of the pattern, when presented to the memory, is mapped back to the pattern. A previously stored pattern is forgotten, or deleted from the memory, if a stimulus that is a perturbed version of the pattern, when presented to the memory, is no longer mapped back to the pattern. In this paper we propose "on-line" memory storage and deletion methods that use an iterative method for computing the pseudo-inverse of a given matrix. The proposed methods allow one to "add" or "delete" a memory pattern by updating the current synaptic weight matrix in a single step, rather than recomputing it from scratch. We first analyze the desired characteristics of neural network associative memories. We then review existing methods for the design of neural associative memories, discuss the generalized BSB neural model and its possible function as an associative memory, and offer arguments in support of using such models for neural associative memories. In particular, generalized BSB-type models are easier to analyze, synthesize, and implement than other neural networks. The results obtained are illustrated by numerical examples. Copyright (C) 1996 Elsevier Science Ltd.
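The abstract does not spell out the iterative pseudo-inverse update it relies on, but a classical single-step recursion of this kind is Greville's formula, which updates the pseudo-inverse of a pattern matrix when one column (one stored pattern) is appended. The sketch below is an illustration of that general technique, not the paper's exact method; the function name `greville_append` and its interface are our own.

```python
import numpy as np

def greville_append(A, A_pinv, a, tol=1e-10):
    """Greville's recursion: given A and its pseudo-inverse A_pinv,
    return the pseudo-inverse of [A, a] without recomputing it
    from scratch. Appending a column corresponds to storing one
    additional memory pattern in pseudo-inverse (projection) learning."""
    a = np.asarray(a, dtype=float).reshape(-1, 1)
    d = A_pinv @ a            # coordinates of a in the column space of A
    c = a - A @ d             # residual of a outside the column space
    if np.linalg.norm(c) > tol:
        # a brings a new direction: b is the pseudo-inverse of the residual
        b = c.T / (c.T @ c)
    else:
        # a is (numerically) dependent on the existing columns
        b = d.T @ A_pinv / (1.0 + d.T @ d)
    # stack the corrected old rows with the new row b
    return np.vstack([A_pinv - d @ b, b])
```

Deleting a pattern can be handled analogously by a rank-one downdate of the pseudo-inverse; the point, as in the paper, is that both operations touch the current matrix in a single step.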