This paper focuses on on-line learning procedures for locally recurrent neural networks, with emphasis on the multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations, which include generalized output and activation feedback multilayer networks (MLNs). We propose a new gradient-based procedure called recursive backpropagation (RBP), whose on-line version, causal recursive backpropagation (CRBP), offers advantages over other on-line training methods. The new CRBP algorithm includes as particular cases backpropagation (BP), temporal backpropagation (TBP), backpropagation for sequences (BPS), and the Back-Tsoi algorithm, among others, thereby providing a unifying view of gradient calculation techniques for recurrent networks with local feedback. The only learning method previously proposed for locally recurrent networks with no architectural restriction is the one by Back and Tsoi. The proposed algorithm has better stability and a higher speed of convergence than the Back-Tsoi algorithm, as supported by the theoretical development and confirmed by simulations. The computational complexity of CRBP is comparable with that of the Back-Tsoi algorithm, e.g., within a factor of 1.5 for usual architectures and parameter settings. The superior performance of the new algorithm, however, easily justifies this small increase in computational burden. In addition, the general paradigms of truncated backpropagation through time (BPTT) and real-time recurrent learning (RTRL) are applied to networks with local feedback and compared with the new CRBP method. The simulations show that CRBP exhibits similar performance, and a detailed analysis of complexity reveals that CRBP is much simpler and easier to implement; e.g., CRBP is local in space and in time, while RTRL is not local in space.
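For intuition about the architecture the abstract refers to, the sketch below shows an IIR synapse: each connection filters its input sequence with an infinite impulse response (ARMA) filter before the neuron applies a static nonlinearity. This is a minimal illustrative sketch, not the paper's notation or training procedure; the function names, filter orders, and the tanh activation are assumptions.

```python
import numpy as np

def iir_synapse(x, b, a):
    """One IIR synapse (illustrative sketch, not the paper's notation):
        y(t) = sum_p b[p] * x(t - p) + sum_q a[q] * y(t - 1 - q)
    b holds the moving-average (feedforward) taps; a holds the
    autoregressive (feedback) taps that make the synapse recurrent."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        ff = sum(b[p] * x[t - p] for p in range(len(b)) if t - p >= 0)
        fb = sum(a[q] * y[t - 1 - q] for q in range(len(a)) if t - 1 - q >= 0)
        y[t] = ff + fb
    return y

# Example: two input sequences feeding one IIR-MLP neuron; the neuron
# sums its filtered synapse outputs and applies a static nonlinearity
# (tanh here, as an assumption).
x1, x2 = np.random.randn(100), np.random.randn(100)
out = np.tanh(iir_synapse(x1, b=[0.5, 0.3], a=[0.2])
              + iir_synapse(x2, b=[0.4], a=[0.1, 0.05]))
```

The feedback taps `a` are what make the recurrence local: each synapse's state depends only on its own past output, rather than on a global network state, which is the structural property the locally recurrent training algorithms in the paper exploit.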