A computation-efficient on-line training algorithm for neurofuzzy networks

Citation
C.W. Chan et al., A computation-efficient on-line training algorithm for neurofuzzy networks, INT J SYST, 31(3), 2000, pp. 297-306
Citations number
17
Subject Categories
AI Robotics and Automatic Control
Journal title
INTERNATIONAL JOURNAL OF SYSTEMS SCIENCE
ISSN journal
00207721
Volume
31
Issue
3
Year of publication
2000
Pages
297 - 306
Database
ISI
SICI code
0020-7721(200003)31:3<297:ACOTAF>2.0.ZU;2-F
Abstract
Neurofuzzy networks are often used to model linear or nonlinear processes, as they can provide some insight into the underlying processes and can be trained using experimental data. As the training of the networks involves intensive computation, it is often performed off line. However, it is well known that neurofuzzy networks trained off line may not be able to cope successfully with time-varying processes. To overcome this problem, the weights of the networks are trained on line. In this paper, an on-line training algorithm with a computation time that is linear in the number of weights is derived by making full use of the local change property of neurofuzzy networks. It is shown that the estimated weights converge to those obtained from the least-squares method, and that the range of the input domain can be extended without retraining the network. Furthermore, the algorithm has a better ability to track time-varying systems than the recursive least-squares method, since in the proposed algorithm a positive definite submatrix is added to the relevant part of the covariance matrix. The performance of the proposed algorithm is illustrated by simulation examples and compared with that obtained using the recursive least-squares method.
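To make the idea in the abstract concrete, the following is a minimal sketch (not the paper's exact algorithm) of a recursive least-squares update, and a variant that adds a positive definite submatrix to the part of the covariance matrix associated with the currently active weights. In a neurofuzzy network only a few basis functions fire for a given input, so the regressor vector is sparse and the boost is local. The boost parameter `q` and the helper names are hypothetical, chosen for illustration.

```python
import numpy as np

def rls_update(w, P, phi, y, lam=1.0):
    """One standard recursive least-squares step.

    w   : weight vector, shape (n,)
    P   : covariance matrix, shape (n, n)
    phi : regressor vector, shape (n,); sparse for a neurofuzzy
          network, since only locally active basis functions fire
    y   : scalar target
    lam : forgetting factor (1.0 = no forgetting)
    """
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)        # gain vector
    w = w + k * (y - phi @ w)            # correct the weights
    P = (P - np.outer(k, Pphi)) / lam    # shrink the covariance
    return w, P

def rls_update_boosted(w, P, phi, y, q=0.01):
    """RLS step followed by adding q*I to the submatrix of P for the
    active weights, so the gain does not decay to zero and
    time-varying weights can still be tracked. Illustrative sketch of
    the covariance-boosting idea only; q is an assumed tuning knob.
    """
    w, P = rls_update(w, P, phi, y)
    active = np.flatnonzero(phi)         # locally active weights
    P[np.ix_(active, active)] += q * np.eye(active.size)
    return w, P
```

With noiseless data from a fixed system, both updates drive the estimated weights toward the least-squares solution; the boosted variant keeps the active part of the covariance bounded away from zero, which is what preserves tracking ability on time-varying systems.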