DYNAMICAL NEURAL NETWORKS THAT ENSURE EXPONENTIAL IDENTIFICATION ERROR CONVERGENCE

Citation
E. B. Kosmatopoulos et al., DYNAMICAL NEURAL NETWORKS THAT ENSURE EXPONENTIAL IDENTIFICATION ERROR CONVERGENCE, Neural Networks, 10(2), 1997, pp. 299-314
Citations number
25
Subject Categories
Mathematical Methods, Biology & Medicine; Computer Sciences, Special Topics; Computer Science, Artificial Intelligence; Neurosciences; Physics, Applied
Journal title
Neural Networks
ISSN journal
0893-6080
Volume
10
Issue
2
Year of publication
1997
Pages
299 - 314
Database
ISI
SICI code
0893-6080(1997)10:2<299:DNNTEE>2.0.ZU;2-2
Abstract
Classical adaptive and robust adaptive schemes are unable to ensure convergence of the identification error to zero in the presence of modeling errors. Therefore, applying such schemes to ''black-box'' identification of nonlinear systems ensures, in the best case, a bounded identification error. In this paper, new learning (adaptive) laws are proposed which, when applied to recurrent high-order neural networks (RHONN), ensure that the identification error converges to zero exponentially fast; moreover, if the identification error is initially zero, it remains zero throughout the identification process. The parameter convergence properties of the proposed scheme, that is, its capability of converging to the optimal neural network model, are also examined and shown to be similar to those of classical adaptive and parameter estimation schemes. Finally, it is noted that the proposed learning laws are not locally implementable, as they make use of global knowledge of signals and parameters. (C) 1997 Elsevier Science Ltd. All Rights Reserved.
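For context, the setting the abstract describes can be sketched with the classical gradient (series-parallel) identification scheme that the paper's proposed laws improve upon. The sketch below is an illustrative assumption, not the paper's new learning laws: the toy plant, the regressor terms z(x), the identifier pole a, and the gain gamma are all made up for demonstration; only the overall structure (RHONN-style identifier x_hat' = -a*x_hat + w.z(x) with adaptive law w' = -gamma*z(x)*e) follows the classical scheme.

```python
import math

# Toy scalar plant (illustrative assumption): x' = -2x + 0.1 x^2
def plant(x):
    return -2.0 * x + 0.1 * x * x

# High-order regressor terms z(x) for the RHONN-style identifier
def z(x):
    return [x, x * x, math.tanh(x)]

a = 1.0       # identifier pole: x_hat' = -a*x_hat + w.z(x)
gamma = 5.0   # adaptation gain (assumed value)
dt = 1e-3
x, x_hat = 1.0, 0.0
w = [0.0, 0.0, 0.0]

for _ in range(10000):  # simulate 10 s with forward Euler
    zx = z(x)
    e = x_hat - x  # identification error
    # Series-parallel identifier driven by the measured plant state
    x_hat += dt * (-a * x_hat + sum(wi * zi for wi, zi in zip(w, zx)))
    # Classical gradient adaptive law: w_i' = -gamma * z_i(x) * e
    w = [wi - dt * gamma * zi * e for wi, zi in zip(w, zx)]
    x += dt * plant(x)

final_error = abs(x_hat - x)
print(final_error)
```

With this classical law the Lyapunov function V = e^2/2 + |w - w*|^2/(2*gamma) gives V' = -a*e^2, so the error is driven toward zero asymptotically but, as the abstract notes, only boundedness is guaranteed once modeling errors are present; the paper's contribution is laws that make the convergence exponential.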