ADAPTIVE FUZZY NEURAL NETWORKS AS IDENTIFIERS OF DISCRETE-TIME NONLINEAR DYNAMIC-SYSTEMS

Citation
J. Theocharis and G. Vachtsevanos, ADAPTIVE FUZZY NEURAL NETWORKS AS IDENTIFIERS OF DISCRETE-TIME NONLINEAR DYNAMIC-SYSTEMS, Journal of Intelligent & Robotic Systems, 17(2), 1996, pp. 119-168
Citations number
29
Categorie Soggetti
System Science","Computer Science Artificial Intelligence","Robotics & Automatic Control
ISSN journal
0921-0296
Volume
17
Issue
2
Year of publication
1996
Pages
119 - 168
Database
ISI
SICI code
0921-0296(1996)17:2<119:AFNNAI>2.0.ZU;2-3
Abstract
An adaptive supervised learning scheme is proposed in this paper for training Fuzzy Neural Networks (FNN) to identify discrete-time nonlinear dynamical systems. The FNN constructs are neural-network-based connectionist models consisting of several layers that are used to implement the functions of a fuzzy logic system. The fuzzy rule base considered here consists of Takagi-Sugeno IF-THEN rules, where the rule outputs are realized as linear polynomials of the input components. The FNN connectionist model is functionally partitioned into three separate parts, namely, the premise part, which provides the truth values of the rule preconditional statements; the consequent part, providing the rule outputs; and the defuzzification part, computing the final output of the FNN construct. The proposed learning scheme is a two-stage training algorithm that performs both structure and parameter learning simultaneously. First, the structure learning task determines the proper fuzzy input partitions and the respective precondition matching, and is carried out by means of the rule base adaptation mechanism. The rule base adaptation mechanism is a self-organizing procedure which progressively generates the proper fuzzy rule base during training, according to the operating conditions. Once the structure learning stage is complete, parameter learning is applied using the back-propagation algorithm, with the objective of adjusting the premise/consequent parameters of the FNN so that the desired input/output representation is captured to an acceptable degree of accuracy. The structure/parameter training algorithm exhibits good learning and generalization capabilities, as demonstrated via a series of simulation studies. Comparisons with conventional multilayer neural networks indicate the effectiveness of the proposed scheme.
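For orientation, the sketch below illustrates the kind of Takagi-Sugeno inference pass described in the abstract: a premise part producing rule firing strengths, a consequent part of per-rule linear polynomials, and a defuzzification part taking the normalized weighted average. This is a minimal illustration, not the authors' architecture; the Gaussian membership functions, product t-norm, and all names and initial values are assumptions, and the structure-learning (rule base adaptation) and back-propagation stages are omitted.

```python
import numpy as np


class TSFuzzyNet:
    """Minimal Takagi-Sugeno fuzzy network sketch (illustrative only).

    Premise part: Gaussian membership functions combined with a product
    t-norm to give each rule's firing strength. Consequent part: one
    linear polynomial of the inputs per rule. Defuzzification part:
    firing-strength-weighted average of the rule outputs.
    """

    def __init__(self, n_inputs, n_rules, seed=None):
        rng = np.random.default_rng(seed)
        # Premise parameters: centers and widths of the Gaussian MFs (assumed form).
        self.centers = rng.uniform(-1.0, 1.0, size=(n_rules, n_inputs))
        self.widths = np.full((n_rules, n_inputs), 0.5)
        # Consequent parameters: y_r = a_r . x + b_r for each rule r.
        self.a = rng.normal(scale=0.1, size=(n_rules, n_inputs))
        self.b = np.zeros(n_rules)

    def forward(self, x):
        x = np.asarray(x, dtype=float)
        # Membership grades of each input component in each rule's fuzzy sets.
        mu = np.exp(-0.5 * ((x - self.centers) / self.widths) ** 2)
        # Rule firing strengths (product t-norm over the input dimensions).
        w = np.prod(mu, axis=1)
        # Rule outputs: linear polynomials of the input components.
        y_rule = self.a @ x + self.b
        # Defuzzification: normalized weighted average of the rule outputs.
        return float(np.sum(w * y_rule) / (np.sum(w) + 1e-12))


# Hypothetical identification setup: predict y(k+1) from the regressor [y(k), u(k)].
net = TSFuzzyNet(n_inputs=2, n_rules=5, seed=0)
print(net.forward([0.3, -0.1]))
```

In the paper's scheme the number of rules and the premise partitions would be generated by the self-organizing rule base adaptation mechanism during training, and the premise/consequent parameters would then be tuned by back-propagation rather than fixed at initialization as in this sketch.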