Classical adaptive and robust adaptive schemes are unable to ensure convergence of the identification error to zero in the presence of modeling errors. Therefore, applying such schemes to "black-box" identification of nonlinear systems guarantees, at best, a bounded identification error. In this paper, new learning (adaptive) laws are proposed which, when applied to recurrent high-order neural networks (RHONNs), ensure that the identification error converges to zero exponentially fast; moreover, if the identification error is initially zero, it remains zero throughout the identification process. The parameter convergence properties of the proposed scheme, that is, its capability of converging to the optimal neural network model, are also examined; they are shown to be similar to those of classical adaptive and parameter estimation schemes. Finally, it is noted that the proposed learning laws are not locally implementable, since they make use of global knowledge of signals and parameters. (C) 1997 Elsevier Science Ltd. All Rights Reserved.
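For context, the setting described above can be sketched with a scalar RHONN identifier trained by a standard gradient-type adaptive law. This is not the paper's proposed scheme (which achieves exponential convergence and uses global signal knowledge); the plant, the choice of high-order sigmoidal terms, and all gains below are illustrative assumptions.

```python
import math

def sig(v):
    """Sigmoid used to build the network's high-order terms."""
    return 1.0 / (1.0 + math.exp(-v))

def z_terms(x):
    """Illustrative high-order terms: sigmoid powers and products of the state."""
    s1, s2 = sig(x), sig(2.0 * x)
    return [s1, s1 * s1, s1 * s2]

def identify(T=20.0, dt=1e-3, a=2.0, gamma=100.0):
    """Euler simulation of x_hat' = -a*x_hat + w^T z(x) with the
    gradient adaptive law w' = gamma * z(x) * e, where e = x - x_hat.
    Returns the final absolute identification error."""
    x, x_hat = 0.5, 0.0            # plant state and model estimate
    w = [0.0, 0.0, 0.0]            # adjustable weights
    for _ in range(int(T / dt)):
        z = z_terms(x)
        e = x - x_hat              # identification error
        x_dot = -2.0 * x + math.tanh(3.0 * x) + 0.5   # "unknown" plant (illustrative)
        x_hat_dot = -a * x_hat + sum(wi * zi for wi, zi in zip(w, z))
        w = [wi + dt * gamma * zi * e for wi, zi in zip(w, z)]
        x += dt * x_dot
        x_hat += dt * x_hat_dot
    return abs(e)
```

With this gradient law the error only converges asymptotically (and, under modeling error, merely stays bounded), which is exactly the limitation of classical schemes that motivates the exponential-rate laws proposed in the paper.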