The conventional dynamic backpropagation (DBP) algorithm proposed by Pineda does not necessarily guarantee the stability of the dynamic neural model in the sense of Lyapunov during a dynamic weight learning process. A difficulty with the DBP learning process is thus that the stability of the equilibrium points has to be checked after learning is completed, either by simulating the set of dynamic equations or by verifying the stability conditions.
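For concreteness, a standard form of the Pineda-type recurrent dynamics underlying DBP (the notation here is illustrative rather than the paper's own) is
\[
\dot{x}_i = -x_i + \sum_{j=1}^{n} w_{ij}\,\sigma(x_j) + I_i, \qquad i = 1,\dots,n,
\]
where learning adjusts the weights $w_{ij}$ so that an equilibrium $x^{*}$ (a solution of $\dot{x} = 0$) yields the desired output; unless $x^{*}$ is stable in the sense of Lyapunov, the trained network may fail to settle onto it.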
To avoid instability during the learning process, two new learning schemes, called the multiplier and constrained learning rate algorithms, are proposed in this paper to provide stable adaptive updating processes for both the synaptic and somatic parameters of the network. Both schemes are based on explicit stability conditions: in the multiplier method, these conditions are introduced into the iterative error index, and the new updating formulations contain a set of inequality constraints.
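As a rough illustration of the multiplier idea (a sketch only, not the paper's exact formulation), the update below augments a generic error index E(w) with inequality stability constraints g_k(w) <= 0 through nonnegative multipliers; E_grad, g, and g_grad are hypothetical placeholders for the gradient of the error index, the constraint functions, and their Jacobian.

import numpy as np

def multiplier_step(w, lam, E_grad, g, g_grad, eta=0.01, rho=0.1):
    # Descend the augmented index L(w, lam) = E(w) + sum_k lam_k * g_k(w)
    # with respect to the weights ...
    w_new = w - eta * (E_grad(w) + g_grad(w).T @ lam)
    # ... and ascend with respect to the multipliers, projecting onto
    # lam >= 0 so that only violated constraints (g_k > 0) are penalized.
    lam_new = np.maximum(0.0, lam + rho * g(w_new))
    return w_new, lam_new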
In the constrained learning rate algorithm, the learning rate is updated at each iterative instant by an equation derived from the stability conditions.
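A minimal sketch of the constrained learning rate idea, assuming a hypothetical function eta_max(w) that returns the largest learning rate still compatible with the stability conditions at the current weights (the paper derives such a bound analytically):

def constrained_lr_step(w, E_grad, eta_max, eta_nominal=0.05):
    # Clip the nominal learning rate to the stability-preserving bound
    # recomputed at each iterative instant.
    eta = min(eta_nominal, eta_max(w))
    return w - eta * E_grad(w)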
With these stable DBP algorithms, any analog target pattern may be implemented as a steady output vector, which is a nonlinear vector function of the stable equilibrium point.
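In illustrative notation, storing a target pattern $y^{d}$ amounts to choosing the parameters so that $y^{*} = S(x^{*}) = y^{d}$, where $x^{*}$ is a stable equilibrium of the trained dynamics and $S(\cdot)$ is the nonlinear output map.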
The applicability of the presented approaches is illustrated through both analog and binary pattern storage examples.