Learning in a dynamic link network (DLN) is a composition of two dynamics: neural dynamics within layers and link dynamics between layers. Based on a rigorous analysis of the neural dynamics, we derive an algorithm for selecting the parameters of the DLN so that the neural dynamics preferentially converges to any chosen attractor. This control is important because the attractors of the neural dynamics determine the link dynamics, which is the main tool for pattern retrieval. Our constructive algorithm therefore makes it possible to explore the link dynamics with any attractor of the neural dynamics. In particular, we show how to obtain on-center activity patterns, which have been used extensively in applications of the DLN to image-recognition tasks and which also play an important role in image processing in the retina. We also propose a Hopfield-like discretized version of the neural dynamics that converges to the attractors much faster than the original DLN.
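The on-center activity patterns mentioned above can be illustrated with a standard difference-of-Gaussians profile (excitatory center, inhibitory surround), a common model of retinal receptive fields. This is a generic sketch, not the paper's construction; the kernel size and the two widths `sigma_c` and `sigma_s` are assumed for illustration.

```python
import numpy as np

def on_center_pattern(size=9, sigma_c=1.0, sigma_s=2.5):
    """Difference-of-Gaussians profile: narrow excitatory center
    minus a broader inhibitory surround (illustrative parameters)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

p = on_center_pattern()
# Positive response at the center, negative (inhibitory) ring further out.
assert p[4, 4] == p.max() and p[4, 4] > 0
assert p[0, 4] < 0
```

The center value is the maximum because the narrow Gaussian dominates there, while at larger radii the broader surround Gaussian outweighs it, producing the inhibitory ring characteristic of on-center patterns.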
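As a point of reference for the Hopfield-like discretization, the sketch below shows the classical discrete Hopfield dynamics: Hebbian outer-product weights and asynchronous threshold updates, which converge to a stored attractor in a few sweeps. This is the standard Hopfield model, not the paper's discretized DLN dynamics; the network size and corruption level are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one +/-1 pattern with a Hebbian outer-product rule (zero diagonal).
N = 64
xi = rng.choice([-1, 1], size=N)
W = np.outer(xi, xi).astype(float)
np.fill_diagonal(W, 0.0)

def relax(s, W, sweeps=5):
    """Asynchronous discrete dynamics: each unit is set to the sign
    of its local field h_i = sum_j W_ij s_j, in random order."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            h = W[i] @ s
            s[i] = 1 if h >= 0 else -1
    return s

# Corrupt 8 of 64 units, then relax back to the stored attractor.
noisy = xi.copy()
noisy[:8] *= -1
recovered = relax(noisy, W)
assert np.array_equal(recovered, xi)
```

With a single stored pattern the local field of every unit points toward the stored value, so the corrupted state is pulled back to the attractor; this discrete, threshold-based update is what makes such dynamics cheap per step compared with integrating continuous neural dynamics.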