An efficient learning algorithm for associative memories

Citation
Y.Q. Wu and S.N. Batalama, An efficient learning algorithm for associative memories, IEEE NEURAL, 11(5), 2000, pp. 1058-1066
Citations number
20
Subject categories
AI Robotics and Automatic Control
Journal title
IEEE TRANSACTIONS ON NEURAL NETWORKS
ISSN journal
1045-9227
Volume
11
Issue
5
Year of publication
2000
Pages
1058 - 1066
Database
ISI
SICI code
1045-9227(200009)11:5<1058:AELAFA>2.0.ZU;2-J
Abstract
Associative memories (AMs) can be implemented using networks with or without feedback [1]. In this paper we utilize a two-layer feedforward neural network and propose a new learning algorithm that efficiently implements the association rule of a bipolar AM. The hidden layer of the network employs p neurons, where p is the number of prototype patterns. In the first layer the input pattern activates at most one hidden-layer neuron, or "winner." In the second layer, the "winner" associates the input pattern to the corresponding prototype pattern. The underlying association principle is minimum Hamming distance, and the proposed scheme can also be viewed as an approximately minimum Hamming distance decoder. Theoretical analysis supported by simulations indicates that, in comparison with other suboptimum minimum Hamming distance association schemes, the proposed structure exhibits the following favorable characteristics: 1) it operates in one shot, which implies no convergence-time requirements; 2) it does not require any feedback; 3) our case studies show that it exhibits superior performance to the popular linear system in a saturated mode (LSSM); 4) it exhibits exponential capacity; and 5) it allows easy performance assessment (no asymptotic analysis is necessary). Finally, since it does not require any hidden-layer interconnections or tree-search operations, it exhibits low structural as well as operational complexity.
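As a minimal sketch of the association principle described in the abstract (not the paper's exact learning algorithm), the following assumes bipolar patterns in {-1, +1}^n. For such vectors the inner product x·p equals n - 2d(x, p), where d is the Hamming distance, so the minimum-Hamming-distance prototype is the one with the largest correlation with the input. The hypothetical `associate` function below mimics the two-layer, one-shot, feedback-free structure: one score per hidden neuron, a winner-take-all selection, and recall of the winning prototype.

```python
import numpy as np

def associate(x, prototypes):
    # For bipolar vectors in {-1,+1}^n: d(x, p) = (n - x.p) / 2, so
    # minimizing Hamming distance = maximizing the inner product.
    scores = prototypes @ x      # first layer: one score per hidden neuron
    winner = np.argmax(scores)   # winner-take-all: at most one active neuron
    return prototypes[winner]    # second layer: recall the winning prototype

# Example: two prototype patterns; the probe is P[0] with one bit flipped.
P = np.array([[ 1,  1, 1, -1, -1, -1],
              [-1, -1, 1,  1,  1, -1]])
probe = np.array([1, 1, -1, -1, -1, -1])
print(associate(probe, P))  # recalls P[0]
```

The one-shot character is visible here: a single forward pass (matrix-vector product plus argmax) performs the association, with no iterative state updates or feedback connections.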