Associative memories (AMs) can be implemented using networks with or without feedback [1]. In this paper we utilize a two-layer feedforward neural network and propose a new learning algorithm that efficiently implements the association rule of a bipolar AM. The hidden layer of the network employs p neurons, where p is the number of prototype patterns. In the first layer, the input pattern activates at most one hidden-layer neuron, or "winner." In the second layer, the "winner" associates the input pattern with the corresponding prototype pattern. The underlying association principle is minimum Hamming distance, and the proposed scheme can also be viewed as an approximately minimum Hamming distance decoder. Theoretical analysis supported by simulations indicates that, in comparison with other suboptimal minimum Hamming distance association schemes, the proposed structure exhibits the following favorable characteristics: 1) it operates in one shot, which implies no convergence-time requirements; 2) it does not require any feedback; and 3) our case studies show that it exhibits superior performance to the popular linear system in a saturated mode (LSSM). The network also exhibits 4) exponential capacity and 5) easy performance assessment (no asymptotic analysis is necessary). Finally, since it does not require any hidden-layer interconnections or tree-search operations, it exhibits low structural as well as operational complexity.
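
The following is a minimal sketch of the one-shot recall principle described above, not the paper's actual learning algorithm (which is not detailed in this abstract). It assumes the hidden-layer weights are simply the bipolar prototype vectors and that the "winner" is picked by a hard argmax; for bipolar vectors the inner product x.u equals n - 2*d_H(x, u), so the maximally activated hidden neuron is the minimum Hamming distance prototype. The paper's network instead activates at most one neuron via thresholds, a detail this sketch glosses over.

    import numpy as np

    def recall(x, prototypes):
        # x: bipolar (+/-1) input vector of length n
        # prototypes: (p, n) array of bipolar prototype patterns, one hidden neuron each
        activations = prototypes @ x      # first layer: inner products, monotone in -Hamming distance
        winner = np.argmax(activations)   # winner-take-all stand-in for "at most one winner"
        return prototypes[winner]         # second layer: winner outputs its associated prototype

    # illustrative usage: recover a prototype from a one-bit-corrupted input
    rng = np.random.default_rng(0)
    U = rng.choice([-1, 1], size=(3, 8))  # 3 prototypes of length 8
    x = U[1].copy()
    x[2] *= -1                            # flip one bit (Hamming distance 1)
    print(np.array_equal(recall(x, U), U[1]))  # True

Because recall is a single feedforward pass, there is no iteration and hence no convergence time, consistent with the one-shot property claimed above.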